
Thread: Need assistance and advice with NFS Server

  1. #1
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    32
    Distro
    Ubuntu

    Need assistance and advice with NFS Server

    This is kind of a hardware-based question. As it deals with the configuration of the server, I thought this would be the best place for it to land.

    The RAID configuration will be ZFS RAIDZ or RAIDZ2 (roughly in line with RAID 5 / 6); I just have not decided which yet.
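    From what I've read so far, the two layouts would be created something like this (the disk names below are just placeholders, not my actual devices):
    Code:
    # RAIDZ (single parity, roughly RAID 5): survives 1 drive failure
    sudo zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
    
    # RAIDZ2 (double parity, roughly RAID 6): survives 2 drive failures
    sudo zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    
    # Verify layout and health; /dev/disk/by-id paths are preferred in practice
    sudo zpool status tank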

    The computer is a Dell Optiplex 3010 (MT mid tower), so I am limited by the available PCIe slots (3 PCIe x1 slots and 1 PCIe x16 slot). I know, an OLD system.

    I have entertained the idea of a PCIe x1 card feeding a 20-port expander, but that ran into the concern of flooding the PCIe x1 bus. So I pretty much threw that thought out as a folly not worth considering.

    I am considering the LSI SAS 9300-16i 16-port controller for this system; as I understand it, this is a controller, not an expander.
    The system will boot from the motherboard SATA ports.

    What I have not settled on is the drives I will use:
    SAS, 2.5" enterprise SATA, or enterprise SSDs, which will drive the form factor of the cages I'll install in the 2x 5.25" bays.

    I have been looking at the Icy Dock 8-bay SAS/SATA-compatible ExpressCage MB038SP, which supports 2.5" drives up to 7mm in height. Two of those would afford me plenty of bays for expansion. My thought here is enterprise-class SATA SSDs.

    OR, if I go with SAS or most enterprise-class SATA mechanical hard drives, the 8-bay Icy Dock cage is thrown out of the build I'm looking hard at; that reduces it to a 4-bay cage (again Icy Dock, SAS/SATA capable). The reason is that most of the mechanical drives (SAS or SATA) I'm seeing are approximately 19mm in height. I have not noticed any SATA or SAS enterprise-class drive offered in a 9.5mm or smaller form factor.

    Advice on drive technology / controllers / cage vendors is what I'm seeking; cost here is a major factor.

    What I have not entertained is the use of large-capacity 3.5" SATA drives (8TB and higher), because of the loss of advertised TB vs. actual available capacity. Another reason is that I would max out at 5 drives (2 exposed in the 5.25" bays, 3 hidden within the system) vs. all drives exposed via the 5.25" bays. I really like the idea of accessing the drives without opening the case; preference mainly. My thought on the multiple 2.5" drives is that I would lose less to overhead (actual vs. advertised capacity) than with large-capacity 3.5" drives. I have NOT done the math on this, so I could be VERY WRONG.
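    Doing a rough bit of that math now (assuming the gap is just the decimal-TB vs. binary-TiB difference, so the percentage lost is the same at any drive size):
    Code:
    # Advertised capacities are decimal (1 TB = 10^12 bytes); tools usually report
    # binary TiB (2^40 bytes). The ratio is constant for any capacity.
    echo "scale=3; 10^12 / 2^40" | bc        # 0.909 -> about 9.1% "lost" to units
    echo "scale=2;  8 * 10^12 / 2^40" | bc   # 8 TB drive  ~= 7.27 TiB
    echo "scale=2; 16 * 10^12 / 2^40" | bc   # 16 TB drive ~= 14.55 TiB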

  2. #2
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Need assistance and advice with NFS Server

    I use ZFS RAIDZ2. Can lose up to 2 array members at one time, and still go on.

    Your first factor for consideration is your storage priority: reads or writes?

    Second is your budgetary constraints. It all costs money.

    There are many options for storage locations. If you want to access drives without having to open the case, I would consider a different case. I have an old Cooler Master Cosmos S case for my workstation. It has 10 exposed 5-1/4" bays. For drive bays, I use one 4x 3-1/2" hot-swap drive cage... that takes up 4 bays. Then I use two 6x 2-1/2" drive cages. That gives me room for 4x 3-1/2" drives and 12x 2-1/2" drives... all of which can be accessed without opening the case.

    I have 12 fans in that case to cool the drives, CPU and GPU. That is important when you start adding many drives.
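    If you do stack up a lot of drives, it is also worth keeping an eye on their temperatures. A quick SMART check looks something like this (device names are examples):
    Code:
    sudo apt install smartmontools
    for d in /dev/sd{a,b,c,d}; do
        echo "== $d =="
        sudo smartctl -A "$d" | grep -i -E 'temperature|airflow'
    done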

    If your main focus is for being a media server, then a set of large HDD's is fine. All it will do most of the time is just reads.

    I have used many SAS drives in the past. They run at 10K RPM, are fast and dependable. I have personally never had one fail. But they cost a lot. You can cut down your costs by using enterprise factory re-certified drives. It's hard to find very large SAS drives, size-wise. SAS controllers will support both SAS and SATA HDD's. SAS drives rated at the same speed as SATA will be faster, as there are more data paths.

    SSD drives are fast and dependable. Their cost has dropped dramatically for SSDs 8TB and smaller. It will drop more as the above-20TB SSDs become more common.

    NVMe is just fast (Big Period). It is more and more common to find NVMe at 4TB and lower.

    You can get creative with how to add controllers and data paths. A good PCIe x16 8-port SAS controller is 12Gb/s and supports up to 128 drives. But if you never plan to use SAS drives, a good SATA HBA will do for HDD and SSD, for a lot cheaper. PCIe x1 M.2 cards with an M.2-to-SATA3 adapter will support 6 SATA drives...

    The next point of bottleneck contention lies with the motherboard's PCIe bus generation and speed. You didn't say what yours was (brand and model), nor its PCIe generation.

    I could go on for pages... even write you a book on it, because you have not set any boundaries or limits of scope. Maybe you should limit your scope a bit to get this more focused and directed.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  3. #3
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Need assistance and advice with NFS Server

    Just saw the Dell Optiplex 3010 MT as your base. My condolences.

    Remember where I said your bottleneck would be the Gen # speed of your PCIe lanes?

    First invest in another used motherboard on EBay and go from there...

    That board is PCIe Gen 2. That means your max speed would be around 500 MB/s for PCIe x1...
    Code:
    These are the theoretical maximum PCIe speeds by PCIe generation and number of lanes. Note that due to system overhead and other hardware characteristics, real-world numbers will be about 15% lower, and will not exceed the rated speed of the storage device itself.
    
    PCIe Revision   x1 Lane    x2 Lane    x4 Lane    x8 Lane    x16 Lane
    =============   ========   ========   ========   ========   ========
    1.0/1.1         250 MB/s   500 MB/s   1 GB/s     2 GB/s     4 GB/s
    2.0/2.1         500 MB/s   1 GB/s     2 GB/s     4 GB/s     8 GB/s
    3.0/3.1         1 GB/s     2 GB/s     4 GB/s     8 GB/s     16 GB/s
    4.0/4.1         2 GB/s     4 GB/s     8 GB/s     16GB/s     32 GB/s
    5.0             4 GB/s     8 GB/s     16 GB/s    32 GB/s    64GB/s
    Even your PCIe x16 is maxed out at 6.8 GB/s (max - 15%). But that is sort of a misnomer. That board has an H61 chipset (https://ark.intel.com/content/www/us...s-chipset.html). That chipset supports PCIe Gen 2.0 and a max of 6 lanes (so x1, x2, & x4 slots). But then again, that Dell MB does have what they say is a PCIe x16 slot. The chipset only supports 4 SATA ports, which is why that board has only 4 SATA ports.

    Your bus will be saturated.

    It is what it is. I think for a NAS, you should invest in a MB of at least PCIe Gen 3 as a starting foundation... Then you could think about many drives, and not worry as much about the I/O throughput.
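    If you want to see what a given board actually negotiates, something along these lines shows the link generation and lane width per device, plus the slot inventory (output varies by system):
    Code:
    sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'
    #   LnkCap = maximum the device/slot supports
    #   LnkSta = speed/width actually negotiated right now
    #   2.5GT/s = Gen1, 5GT/s = Gen2, 8GT/s = Gen3, 16GT/s = Gen4
    
    sudo dmidecode -t slot    # slots as reported by the firmware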

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  4. #4
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    32
    Distro
    Ubuntu

    Re: Need assistance and advice with NFS Server

    Quote Originally Posted by MAFoElffen View Post
    Just saw the Dell Optiplex 3010 MT as your base. My condolences.

    Remember where I said your bottleneck would be the Gen # speed of your PCIe lanes?

    First invest in another used motherboard on EBay and go from there...

    That board is PCIe Gen 2. That means your max speed would be around 500 MB/s for PCIe x1...
    Code:
    These are the theoretical maximum PCIe speeds by PCIe generation and number of lanes. Note that due to system overhead and other hardware characteristics, real-world numbers will be about 15% lower, and will not exceed the rated speed of the storage device itself.
    
    PCIe Revision   x1 Lane    x2 Lane    x4 Lane    x8 Lane    x16 Lane
    =============   ========   ========   ========   ========   ========
    1.0/1.1         250 MB/s   500 MB/s   1 GB/s     2 GB/s     4 GB/s
    2.0/2.1         500 MB/s   1 GB/s     2 GB/s     4 GB/s     8 GB/s
    3.0/3.1         1 GB/s     2 GB/s     4 GB/s     8 GB/s     16 GB/s
    4.0/4.1         2 GB/s     4 GB/s     8 GB/s     16GB/s     32 GB/s
    5.0             4 GB/s     8 GB/s     16 GB/s    32 GB/s    64GB/s
    Even your PCIe x16 is maxed out at 6.8 GB/s (max - 15%). But that is sort of a misnomer. That board has an H61 chipset (https://ark.intel.com/content/www/us...s-chipset.html). That chipset supports PCIe Gen 2.0 and a max of 6 lanes (so x1, x2, & x4 slots). But then again, that Dell MB does have what they say is a PCIe x16 slot. The chipset only supports 4 SATA ports, which is why that board has only 4 SATA ports.

    Your bus will be saturated.

    It is what it is. I think for a NAS, you should invest in a MB of at least PCIe Gen 3 as a starting foundation... Then you could think about many drives, and not worry as much about the I/O throughput.
    Looking over your post and doing a bit of searching, most of the MBs I'm seeing use that same H61 chipset.

    The Mainboard I'll be pulling out of that case is a Mini-ATX.

    Now, if I go to a 4th Gen Intel processor, ditch the 3rd Gen i5 with the Optiplex board, and go to the i7-4790K, I'm seeing two real good possible MBs, both micro-ATX.

    1st option is the MSI B85M-E45; I'm running one of those in my Windows box for my CNC. (Prior to setting it up for the CNC I had Ubuntu Desktop on it, and it ran like a top.)
    Code:
    CPU
    Support
    ■ 4th Generation Intel® Core™ i7 / Core™ i5 / Core™ i3 / Pentium® /Celeron® processors for LGA 1150 socket
    Chipset
    ■ Intel® B85 Express Chipset
    Memory Support
    ■ 4x DDR3 memory slots supporting up to 32GB
    ■ Supports DDR3 1600/ 1333/ 1066 MHz
    ■ Dual channel memory architecture
    ■ Supports non-ECC, un-buffered memory
    ■ Supports Intel® Extreme Memory Profile (XMP)
    Expansion Slots
    ■ 1x PCIe 3.0 x16 slot
    ■ 2x PCIe 2.0 x1 slots
    ■ 1x PCI slot
    Onboard Graphics
    ■ 1x HDMI port, supporting the maximum resolution of 4096x2160@24Hz, 24bpp/ 2560x1600@60Hz, 24bpp/ 1920x1080@60Hz, 36bpp
    ■ 1x VGA port, supporting a maximum resolution of 1920x1200 @ 60Hz
    ■ 1x DVI-D port, supporting a maximum resolution of 1920x1200 @ 60Hz
    Storage
    ■ Intel® B85 Express Chipset
    - 4x SATA 6Gb/s ports (SATA1~4)
    - 2x SATA 3Gb/s ports (SATA5~6)
    - Supports Intel® Rapid Start Technology, Intel® Smart Connect Technology*
    * Supports Intel Core processors on Windows 7 and Windows 8.
    USB
    ■ Intel B85 Express Chipset - 4x USB 3.0 ports (2 ports on the back panel, 2 ports available through the internal USB 3.0 connector)
    - 8x USB 2.0 ports (4 ports on the back panel, 4 ports available through the internal USB connectors)
    Audio
    ■ Realtek® ALC887 Codec - 7.1-Channel High Definition Audio
    LAN
    ■ 1x Realtek® 8111G Gigabit LAN controller
    Back Panel Connectors
    ■ 1x PS/2 keyboard port
    ■ 1x PS/2 mouse port
    ■ 4x USB 2.0 ports
    ■ 1x HDMI port
    ■ 2x USB 3.0 ports
    ■ 1x VGA port
    ■ 1x DVI-D port
    ■ 1x LAN (RJ45) port
    ■ 3x audio jacks
    Mainboard connections
    ■ 1x 24-pin ATX main power connector
    ■ 1x 4-pin ATX 12V power connector
    ■ 4x SATA 6Gb/s connectors
    ■ 2x SATA 3Gb/s connectors
    ■ 2x USB 2.0 connectors (supports additional 4 USB 2.0 ports)
    ■ 1x USB 3.0 connector (supports additional 2 USB 3.0 ports)
    ■ 1x 4-pin CPU fan connector
    ■ 1x 4-pin system fan connector
    ■ 1x 3-pin system fan connector
    ■ 1x Clear CMOS jumper
    ■ 1x Front panel audio connector
    ■ 2x System panel connectors
    ■ 1x TPM module connector
    ■ 1x Serial port connector
    ■ 1x Parallel port connector
    ■ 1x Chassis Intrusion connector
    I/O Controller
    ■ NUVOTON NCT6779 Controller Chip
    Hardware Monitor
    ■ CPU/System temperature detection
    ■ CPU/System fan speed detection
    ■ CPU/System fan speed control
    BIOS
    Features
    ■ 1x 64 Mb flash
    ■ UEFI AMI BIOS
    ■ ACPI 5.0, PnP 1.0a, SM BIOS 2.7, DMI 2.0
    Or a Supermicro X10SLM-F

    Code:
    Feature X10SLM-F 
    PCH  
    Intel C224 Exp 
    
    PCI-E Slot 
    1 PCI-E 3.0 x8 in x16,
    1 PCI-E 3.0 x8,
    1 PCI-E 2.0 x4 in x8
    
    DIMM Support
    Up to 32GB ECC UDIMM DDR3- 1600MHz
    
    LAN CTRL
     Dual GbE LAN Ports 1xi210AT, 1xi217LM
    
    COM 
    One COM Port
    One COM Header
    
    SATA 
    4 SATA 6.0Gbs
    2 SATA 3.0Gbs
    
    RAID    RAID 0/1/5/10 (See pages 2-37 and 2-38 for details)
    USB Support
    4 USB 3.0 Ports,
    6 USB 2.0 Ports
    
    IPMI
     IPMI 2.0 (X10SLM-F)
    Unless I'm missing something in the specs, I'm preferring the Supermicro board. What is causing me to kinda pick it over the MSI is the two PCI-E 3.0 slots versus one on the MSI.
    Thoughts & opinions, please.
    Last edited by sgt-mike; March 25th, 2024 at 02:13 AM.

  5. #5
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    32
    Distro
    Ubuntu

    Re: Need assistance and advice with NFS Server

    Quote Originally Posted by MAFoElffen View Post
    Just saw the Dell Optiplex 3010 MT as your base. My condolences.

    Remember where I said your bottleneck would be the Gen # speed of your PCIe lanes?

    First invest in another used motherboard on EBay and go from there...

    That board is PCIe Gen 2. That means your max speed would be around 500 MB/s for PCIe x1...
    Code:
    These are the theoretical maximum PCIe speeds by PCIe generation and number of lanes. Note that due to system overhead and other hardware characteristics, real-world numbers will be about 15% lower, and will not exceed the rated speed of the storage device itself.
    
    PCIe Revision   x1 Lane    x2 Lane    x4 Lane    x8 Lane    x16 Lane
    =============   ========   ========   ========   ========   ========
    1.0/1.1         250 MB/s   500 MB/s   1 GB/s     2 GB/s     4 GB/s
    2.0/2.1         500 MB/s   1 GB/s     2 GB/s     4 GB/s     8 GB/s
    3.0/3.1         1 GB/s     2 GB/s     4 GB/s     8 GB/s     16 GB/s
    4.0/4.1         2 GB/s     4 GB/s     8 GB/s     16GB/s     32 GB/s
    5.0             4 GB/s     8 GB/s     16 GB/s    32 GB/s    64GB/s
    Even your PCIe x16 is maxed out at 6.8 GB/s (max - 15%). But that is sort of a misnomer. That board has an H61 chipset (https://ark.intel.com/content/www/us...s-chipset.html). That chipset supports PCIe Gen 2.0 and a max of 6 lanes (so x1, x2, & x4 slots). But then again, that Dell MB does have what they say is a PCIe x16 slot. The chipset only supports 4 SATA ports, which is why that board has only 4 SATA ports.

    Your bus will be saturated.

    It is what it is. I think for a NAS, you should invest in a MB of at least PCIe Gen 3 as a starting foundation... Then you could think about many drives, and not worry as much about the I/O throughput.
    Looking over your post and doing a bit of searching, most of the MBs I'm seeing use that same H61 chipset.

    The Mainboard I'll be pulling out of that case is a Mini-ATX. Correction Micro-ATX.

    Now, if I go to a 4th Gen Intel processor, ditch the 3rd Gen i5 with the Optiplex board, and go to the i7-4790K, I'm seeing two real good possible MBs, both micro-ATX.

    1st option is the MSI B85M-E45; I'm running one of those in my Windows box for my CNC. (Prior to setting it up for the CNC I had Ubuntu Desktop on it, and it ran like a top.)
    Code:
    CPU
    Support
    ■ 4th Generation Intel® Core™ i7 / Core™ i5 / Core™ i3 / Pentium® /Celeron® processors for LGA 1150 socket
    Chipset
    ■ Intel® B85 Express Chipset
    Memory Support
    ■ 4x DDR3 memory slots supporting up to 32GB
    ■ Supports DDR3 1600/ 1333/ 1066 MHz
    ■ Dual channel memory architecture
    ■ Supports non-ECC, un-buffered memory
    ■ Supports Intel® Extreme Memory Profile (XMP)
    Expansion Slots
    ■ 1x PCIe 3.0 x16 slot
    ■ 2x PCIe 2.0 x1 slots
    ■ 1x PCI slot
    Onboard Graphics
    ■ 1x HDMI port, supporting the maximum resolution of 4096x2160@24Hz, 24bpp/ 2560x1600@60Hz, 24bpp/ 1920x1080@60Hz, 36bpp
    ■ 1x VGA port, supporting a maximum resolution of 1920x1200 @ 60Hz
    ■ 1x DVI-D port, supporting a maximum resolution of 1920x1200 @ 60Hz
    Storage
    ■ Intel® B85 Express Chipset
    - 4x SATA 6Gb/s ports (SATA1~4)
    - 2x SATA 3Gb/s ports (SATA5~6)
    - Supports Intel® Rapid Start Technology, Intel® Smart Connect Technology*
    * Supports Intel Core processors on Windows 7 and Windows 8.
    USB
    ■ Intel B85 Express Chipset - 4x USB 3.0 ports (2 ports on the back panel, 2 ports available through the internal USB 3.0 connector)
    - 8x USB 2.0 ports (4 ports on the back panel, 4 ports available through the internal USB connectors)
    Audio
    ■ Realtek® ALC887 Codec - 7.1-Channel High Definition Audio
    LAN
    ■ 1x Realtek® 8111G Gigabit LAN controller
    Back Panel Connectors
    ■ 1x PS/2 keyboard port
    ■ 1x PS/2 mouse port
    ■ 4x USB 2.0 ports
    ■ 1x HDMI port
    ■ 2x USB 3.0 ports
    ■ 1x VGA port
    ■ 1x DVI-D port
    ■ 1x LAN (RJ45) port
    ■ 3x audio jacks
    Mainboard connections
    ■ 1x 24-pin ATX main power connector
    ■ 1x 4-pin ATX 12V power connector
    ■ 4x SATA 6Gb/s connectors
    ■ 2x SATA 3Gb/s connectors
    ■ 2x USB 2.0 connectors (supports additional 4 USB 2.0 ports)
    ■ 1x USB 3.0 connector (supports additional 2 USB 3.0 ports)
    ■ 1x 4-pin CPU fan connector
    ■ 1x 4-pin system fan connector
    ■ 1x 3-pin system fan connector
    ■ 1x Clear CMOS jumper
    ■ 1x Front panel audio connector
    ■ 2x System panel connectors
    ■ 1x TPM module connector
    ■ 1x Serial port connector
    ■ 1x Parallel port connector
    ■ 1x Chassis Intrusion connector
    I/O Controller
    ■ NUVOTON NCT6779 Controller Chip
    Hardware Monitor
    ■ CPU/System temperature detection
    ■ CPU/System fan speed detection
    ■ CPU/System fan speed control
    BIOS
    Features
    ■ 1x 64 Mb flash
    ■ UEFI AMI BIOS
    ■ ACPI 5.0, PnP 1.0a, SM BIOS 2.7, DMI 2.0
    Or a Supermicro X10SLM-F

    Code:
    Feature X10SLM-F 
    PCH  
    Intel C224 Exp 
    
    PCI-E Slot 
    1 PCI-E 3.0 x8 in x16,
    1 PCI-E 3.0 x8,
    1 PCI-E 2.0 x4 in x8
    
    DIMM Support
    Up to 32GB ECC UDIMM DDR3- 1600MHz
    
    LAN CTRL
     Dual GbE LAN Ports 1xi210AT, 1xi217LM
    
    COM 
    One COM Port
    One COM Header
    
    SATA 
    4 SATA 6.0Gbs
    2 SATA 3.0Gbs
    
    RAID    RAID 0/1/5/10 (See pages 2-37 and 2-38 for details)
    USB Support
    4 USB 3.0 Ports,
    6 USB 2.0 Ports
    
    IPMI
     IPMI 2.0 (X10SLM-F)
    Unless I'm missing something in the specs, I'm preferring the Supermicro board. What is causing me to kinda pick it over the MSI is the two PCI-E 3.0 slots versus one on the MSI.
    Thoughts & opinions, please.

    I kept looking at the Supermicro, and with its chipset it looks like the i7-4790K isn't supported.
    But the Xeon E3-1200 v3 and the 4th Gen Core i3 are, which in my opinion is a performance hit.
    Last edited by sgt-mike; March 25th, 2024 at 03:13 AM.

  6. #6
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Need assistance and advice with NFS Server

    MSI & SuperMicro are both my go-to's.

    But on a budget (low-cost), looking at what you can salvage from your old system to re-use, and getting yourself a good foundation...

    The i7-4790K is a good CPU choice, and the integrated iGPU will save you money and slots.

    Ditch the case. This one is $75: https://www.ebay.com/itm/23481312999...hoCtXcQAvD_BwE
    That is a full-tower for $75 with plenty of room for storage and whatever you want to throw at it. Has plenty of room to grow.

    Motherboard for that CPU would be socket FCLGA1150:

    First Choice would be this SuperMicro:
    https://www.ebay.com/itm/39454802596...xoC5y8QAvD_BwE
    Spec's: https://www.supermicro.com/en/produc...erboard/x10sae

    Pros: has 8 SATA ports...
    Cons: has 2 legacy PCI slots, which are hard to find useful cards for these days. The PCIe x1 slots are Gen 2.0.


    Second choice would be this MSI:
    https://www.ebay.com/itm/25566841065...RoCR_YQAvD_BwE
    Specs: https://www.msi.com/Motherboard/B85-.../Specification

    That was the best board I could find used for what you are wanting to do. It is ATX and will fit that case.

    Cons: has 3 legacy PCI slots; the 2 PCIe x1 slots are Gen 2.0.

    *** Now if you went with 5th Gen Intel i7-5970K for $30:
    https://www.ebay.com/itm/16647090764...RoCclMQAvD_BwE

    Then things open up with a socket LGA 2011-v3 motherboard:
    Also DDR3, but... it supports up to 64GB RAM, and the PCIe slots should all be Gen 3 (mostly)...

    This is the best choice I found:
    SuperMicro:
    https://www.ebay.com/itm/29599874230...BoCpCgQAvD_BwE
    Spec's: https://www.thomas-krenn.com/en/wiki...-F_Motherboard
    Pros: 2x PCI-E 3.0 x8 (in x16) slots, 2x PCI-E 3.0 x8 (in x8) slots, 2x PCI-E 3.0 x4 (in x8) slots, 1x PCI-E 2.0 x4 (in x8) slot. 64 GB non-ECC RAM. 6x SATA3 ports, 2x SATA2 ports.
    Last edited by MAFoElffen; March 25th, 2024 at 03:21 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  7. #7
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Need assistance and advice with NFS Server

    My recommendation would be the case ($75), the 5th Gen Intel i7 ($30), and the last SuperMicro board. Notice that board says $99.99 or Best Offer. Make him an offer; doesn't hurt to try, right?

    Then you could do what you wanted to do in your first post. Plenty of room for growth over time, without fighting anything to make it work... That would be a great foundation to build on top of for a NAS.

    Like I said, room for storage drives is your priority, then PCIe and SATA bus generation speeds. That board has the PCIe slots to support any storage controllers and any network NICs you might need for your network shares in the future.
    Last edited by MAFoElffen; March 25th, 2024 at 03:31 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  8. #8
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    32
    Distro
    Ubuntu

    Re: Need assistance and advice with NFS Server

    I really like the idea of the 5th Gen i7 a lot...
    While I have not read all the way through your last two posts...
    I re-read where you wrote: "I could go on for pages... even write you a book on it, because you have not set any boundaries or limits of scope. Maybe you should limit your scope a bit to get this more focused and directed."
    That prompted me to hurry up and post this.

    Needs for the NAS/NFS server?
    ... Easy backup of data from a media server... uploading ISOs of install disks, photos, and something for me to learn on...

    Actually, it will probably be heavier on the read side vs. the write side.
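    Something like this is what I picture the eventual NFS export side looking like (the dataset paths and LAN subnet below are just placeholders I made up):
    Code:
    # /etc/exports -- hypothetical dataset mountpoints and client subnet
    /tank/backups  192.168.1.0/24(rw,sync,no_subtree_check)
    /tank/media    192.168.1.0/24(ro,sync,no_subtree_check)
    
    sudo apt install nfs-kernel-server
    sudo exportfs -ra    # re-read /etc/exports
    sudo exportfs -v     # show active exports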


    I know you lean towards RAIDZ2 vs. RAIDZ... but up till last night, after I posted, I really didn't know exactly how the zpool worked... and I think I have just enough knowledge to be silly and dangerous, haha. But the way I understand it, I can form a zpool from a RAIDZ and a RAIDZ2 vdev (hopefully I stated that correctly).
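    Something like this is what I'm picturing (device names are placeholders; from what I've read, zpool warns about mixing replication levels and wants -f to force it):
    Code:
    # One pool built from a RAIDZ vdev plus a RAIDZ2 vdev.
    # ZFS stripes data across both vdevs; losing any whole vdev loses the pool,
    # so mixed parity levels are allowed but generally discouraged.
    sudo zpool create -f tank \
        raidz  /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
    sudo zpool status tank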

    BTW
    I have earmarked the items you advised; at the end of this month (payday -- military disabled retiree) I will start rounding up the components.
    I absolutely love this case you found: "Ditch the case. This one is $75: https://www.ebay.com/itm/23481312999...hoCtXcQAvD_BwE
    That is a full-tower for $75 with plenty of room for storage and whatever you want to throw at it. Has plenty of room to grow."

    The price is great until I got to the shipping -- WOW, $103.00. I've shipped machine guns way cheaper than that.
    I'll buzz/email the seller and see what's up with that. But then again, I've seen cases for over $300 without PSUs, not counting the shipping. (Just sticker shock is all; I expected about half that amount until I looked.) So I just might need to "suck it up, buttercup," and the more I mull it over and type, the easier a pill the shipping cost is to swallow.
    For a PSU I was thinking a modular 1000 to 1200W (Corsair maybe?).
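    Rough back-of-the-envelope math I've been using for that (the per-device wattages are ballpark guesses, not measured figures):
    Code:
    # Ballpark draw: CPU ~140 W, board+RAM+fans ~50 W, HBA ~25 W,
    # 3.5" HDD ~8 W each, 2.5" HDD ~5 W each, SATA SSD ~3 W each
    echo "140 + 50 + 25 + 16*5" | bc    # 16x 2.5" HDDs -> ~295 W
    echo "140 + 50 + 25 + 16*3" | bc    # 16x SATA SSDs -> ~263 W
    # Even doubled for headroom and spin-up surge, 1000-1200 W is mostly extra margin.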
    I thank you from the bottom of my heart on your research and posting those items.
    Last edited by sgt-mike; March 25th, 2024 at 04:18 AM.

  9. #9
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Need assistance and advice with NFS Server

    With priority on reads for media (movies, photos, music)... SAS would be overkill. So would SSD or NVMe. But in the same sentence you mentioned backups too, and then your priority changes to "network transfer speeds and writes." <--- Depends on what types of backups (full, incremental, or differential), and how large the files are.

    TrueNAS and other NAS systems really push ZFS. I challenge you to learn ZFS. I think it is really worthwhile to learn and use. It is fast and dependable. It is easy to use, as long as you learn a few commands to use it, and keep up with a few things.

    Read posts in this forum from 1fallen and me on ZFS and see for yourself. Both of us help here with ZFS use and work behind the scenes to keep things going for it on Ubuntu. It uses a COW (Copy-On-Write) filesystem, meaning new data is written out and verified before it is committed, so existing data is never overwritten in place. It is also a volume manager, sort of like LVM.
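    The "few commands" really do amount to a short list. A sketch of the day-to-day ones (the pool/dataset names here are just examples):
    Code:
    sudo zpool status -v tank                    # health, errors, scrub/resilver progress
    sudo zpool scrub tank                        # verify every block against its checksum (run periodically)
    sudo zfs list -o name,used,avail,mountpoint  # space usage per dataset
    sudo zfs create tank/backups                 # datasets are cheap -- one per share works well
    sudo zfs snapshot tank/backups@2024-03-25    # instant point-in-time snapshot
    sudo zfs set sharenfs=on tank/backups        # ZFS can even manage the NFS export itself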

    What I like about it is that, like LVM, I can make changes or do maintenance while the filesystem is live. For RAIDZ, if you look at this on my server:
    Code:
    mafoelffen@Mikes-B460M:~$ sudo zpool status -v datapool
    [sudo] password for mafoelffen: 
      pool: datapool
     state: ONLINE
      scan: scrub repaired 0B in 00:21:04 with 0 errors on Sun Mar 10 00:45:05 2024
    config:
    
        NAME                                                   STATE     READ WRITE CKSUM
        datapool                                               ONLINE       0     0     0
          raidz2-0                                             ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TA09560A-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TA11601H-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TA47393M-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNS0W330507J-part1  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_2TB_S6PNNM0TB08933B-part1  ONLINE       0     0     0
        logs    
          nvme-Samsung_SSD_970_EVO_2TB_S464NB0KB10521K-part2   ONLINE       0     0     0
        cache
          nvme-Samsung_SSD_970_EVO_2TB_S464NB0KB10521K-part1   ONLINE       0     0     0
    
    errors: No known data errors
    That pool (datapool) is a 5x 2TB SSD RAIDZ2 array, plus one 2TB NVMe drive with one partition used as a read cache (L2ARC) and another partition used as a write log (SLOG). The SSDs are SATA3, and the I/O speeds benchmark at
    Code:
    TEST_WRITE: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1): [W(1)][-.-%][eta 00m:00s]                          
    Jobs: 1 (f=1): [W(1)][-.-%][eta 00m:00s] 
    TEST_WRITE: (groupid=0, jobs=1): err= 0: pid=861190: Tue Feb 13 07:54:18 2024
      write: IOPS=872, BW=873MiB/s (915MB/s)(10.0GiB/11736msec); 0 zone resets
        slat (usec): min=121, max=1734, avg=407.70, stdev=273.42
        clat (nsec): min=1198, max=7534.3M, avg=35377576.92, stdev=412853630.65
         lat (usec): min=135, max=7535.0k, avg=35785.57, stdev=412876.22
        clat percentiles (msec):
         |  1.00th=[    5],  5.00th=[    5], 10.00th=[    6], 20.00th=[    6],
         | 30.00th=[    6], 40.00th=[    8], 50.00th=[   11], 60.00th=[   15],
         | 70.00th=[   16], 80.00th=[   18], 90.00th=[   21], 95.00th=[   31],
         | 99.00th=[   46], 99.50th=[   49], 99.90th=[ 7550], 99.95th=[ 7550],
         | 99.99th=[ 7550]
       bw (  MiB/s): min=  548, max= 4800, per=100.00%, avg=2215.33, stdev=1465.65, samples=9
       iops        : min=  548, max= 4800, avg=2215.33, stdev=1465.65, samples=9
      lat (usec)   : 2=0.02%, 4=0.03%, 250=0.03%, 500=0.04%, 750=0.07%
      lat (usec)   : 1000=0.03%
      lat (msec)   : 2=0.21%, 4=0.41%, 10=45.43%, 20=40.48%, 50=12.95%
      lat (msec)   : >=2000=0.30%
      fsync/fdatasync/sync_file_range:
        sync (nsec): min=446, max=446, avg=446.00, stdev= 0.00
        sync percentiles (nsec):
         |  1.00th=[  446],  5.00th=[  446], 10.00th=[  446], 20.00th=[  446],
         | 30.00th=[  446], 40.00th=[  446], 50.00th=[  446], 60.00th=[  446],
         | 70.00th=[  446], 80.00th=[  446], 90.00th=[  446], 95.00th=[  446],
         | 99.00th=[  446], 99.50th=[  446], 99.90th=[  446], 99.95th=[  446],
         | 99.99th=[  446]
      cpu          : usr=3.69%, sys=29.50%, ctx=13527, majf=0, minf=15
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
      WRITE: bw=873MiB/s (915MB/s), 873MiB/s-873MiB/s (915MB/s-915MB/s), io=10.0GiB (10.7GB), run=11736-11736msec
    TEST_READ: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
    fio-3.28
    Starting 1 process
    Jobs: 1 (f=1)
    TEST_READ: (groupid=0, jobs=1): err= 0: pid=861236: Tue Feb 13 07:54:20 2024
      read: IOPS=8006, BW=8006MiB/s (8395MB/s)(10.0GiB/1279msec)
        slat (usec): min=77, max=511, avg=123.68, stdev=39.18
        clat (nsec): min=1240, max=9849.2k, avg=3827351.26, stdev=1161532.91
         lat (usec): min=111, max=10188, avg=3951.21, stdev=1197.20
        clat percentiles (usec):
         |  1.00th=[ 2212],  5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425],
         | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3490],
         | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 4686], 95.00th=[ 7308],
         | 99.00th=[ 7635], 99.50th=[ 7832], 99.90th=[ 8225], 99.95th=[ 8848],
         | 99.99th=[ 9634]
       bw (  MiB/s): min= 6522, max= 8928, per=96.49%, avg=7725.00, stdev=1701.30, samples=2
       iops        : min= 6522, max= 8928, avg=7725.00, stdev=1701.30, samples=2
      lat (usec)   : 2=0.04%, 4=0.01%, 250=0.10%, 500=0.10%, 750=0.10%
      lat (usec)   : 1000=0.14%
      lat (msec)   : 2=0.43%, 4=87.92%, 10=11.17%
      cpu          : usr=0.23%, sys=99.37%, ctx=1, majf=0, minf=8204
      IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
         issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=32
    
    Run status group 0 (all jobs):
       READ: bw=8006MiB/s (8395MB/s), 8006MiB/s-8006MiB/s (8395MB/s-8395MB/s), io=10.0GiB (10.7GB), run=1279-1279msec
    I know: "Show me the data!" --> "Just the facts, ma'am..."
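    If you want to reproduce that kind of test yourself, a command roughly along these lines (target directory and sizes are placeholders) produces output in that format:
    Code:
    # Sequential write test, then read test, against files on the pool
    fio --name=TEST_WRITE --directory=/datapool/fio --rw=write --bs=1M --size=10G \
        --ioengine=libaio --iodepth=32 --numjobs=1 --group_reporting
    fio --name=TEST_READ  --directory=/datapool/fio --rw=read  --bs=1M --size=10G \
        --ioengine=libaio --iodepth=32 --numjobs=1 --group_reporting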

    Like LVM, you can always add vdevs to a pool (even without any RAID) to increase your storage space. But putting disks into striped, redundant vdevs not only gives some protection for data integrity, it also improves I/O performance greatly.
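    For example (device names are placeholders):
    Code:
    # Grow the pool by adding another vdev (new data is striped across all vdevs):
    sudo zpool add tank raidz2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp
    
    # Or bolt on an NVMe partition as L2ARC (read cache) and another as SLOG (write log),
    # like the datapool layout shown above:
    sudo zpool add tank cache /dev/nvme0n1p1
    sudo zpool add tank log   /dev/nvme0n1p2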

    If you search my username with "zfs raidz performance" you'll find a thread where we tuned someone's PCIe Gen 2 system with RAIDZ3 to do local network backups at around 550-575 MB/s sustained writes on very large files... (faster on smaller files).
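    The sort of dataset tuning involved looks like this (the values are typical starting points for a large-file backup/media share, not a prescription):
    Code:
    sudo zfs set recordsize=1M   tank/backups   # large records suit big sequential files
    sudo zfs set compression=lz4 tank/backups   # cheap CPU-wise, often a net win
    sudo zfs set atime=off       tank/backups   # skip access-time writes on every read
    sudo zfs set xattr=sa        tank/backups   # store extended attributes more efficiently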
    Last edited by MAFoElffen; March 25th, 2024 at 04:33 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  10. #10
    Join Date
    Mar 2024
    Location
    Central Region U.S.A
    Beans
    32
    Distro
    Ubuntu

    Re: Need assistance and advice with NFS Server

    Yes, very interested in OpenZFS, RAIDZ and RAIDZ2, and really I don't know enough to ask good questions (yet).

    But from what little I have read thus far it looks very promising.

    Back to the server's purpose: outside the ISOs for installing operating systems, or drive images of, say, my 1TB laptop or the wife's 500GB laptop, the largest files written to the NAS/NFS/backup server will probably be in the 1 to 4 GiB range (media data).

    The reason I mentioned the SAS and enterprise-class SATA (2.5") configurations is that the spinning-disk SAS or SATA drives I have seen are about 19mm or so in height/thickness. That would drive the cage configuration to 4 drives per 5.25" external bay.

    But like you stated, enterprise-class SSDs, whether SAS or SATA, are a skinny 7mm in height, which would allow either a 6-drive or 8-drive cage in each 5.25" external bay. I like the 8-drive best, as it would allow 6 members and 2 spares in that cage for the array.

    NVMe: I've seen those cages at 8 drives per cage, and yep, they are just stupid fast.

    On the large 3.5" disks (over 8TB) it gets really frustrating with the advertised vs. the actual capacity. I know why that is, but still, you lose a TB or more per drive before it gets into an array. That is a fact of life, I guess. Price-point wise they might be cheaper, IDK; I just have not applied the math to it for cost savings.
    Last edited by sgt-mike; March 25th, 2024 at 08:28 AM.
