HC1/HC2 NAS

NewbieHC1HC2
Posts: 5
Joined: Thu Aug 02, 2018 4:57 pm
languages_spoken: english
ODROIDs: Buying

HC1/HC2 NAS

Unread post by NewbieHC1HC2 » Mon Aug 06, 2018 6:50 am

Each unit has one SSD installed, and I need four drives in my NAS.

1) Do I need four HC1/HC2 units, one per SSD?
2) In either case, what makes them show up as a single NAS on the network so that sharing, access, and other settings can be set up and managed as one unit?
3) Do I need to purchase the "stack" to get that single point of control / single-NAS look and behavior?

I will read up on the stack this evening, but I still need clarification from Hardkernel, please.
4) If so, would the controlling HC unit (unit 5) share its own drive as part of the NAS, keep it separate, or could it optionally be added to the NAS as well?

Can I run RAID? That is, write to one drive and have the data copied to a second drive (so I write to two of the four drives, with copies kept on the other two)?

I will also be purchasing a separate SBC (OS undetermined) as a workstation to keep on a side table, for access from my downstairs chair. So possibly six SBCs purchased here in total.

Thank you in advance for your valued time and for sharing your knowledge with a newbie!

memeka
Posts: 4144
Joined: Mon May 20, 2013 10:22 am
languages_spoken: english
ODROIDs: XU rev2 + eMMC + UART
U3 + eMMC + IO Shield + UART

Re: HC1/HC2 NAS

Unread post by memeka » Mon Aug 06, 2018 7:35 am

No, you can't make the four drives behave as a single NAS that way, and you don't get "single control" or RAID across separate units.
But you can run GlusterFS on multiple units (or on a stack), and GlusterFS can give you some of the features you want.
You can read about it in these articles:
https://magazine.odroid.com/article/200 ... lications/
https://magazine.odroid.com/article/exp ... ver-setup/
https://magazine.odroid.com/article/exp ... rformance/
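
For anyone picturing how that looks in practice, here is a rough sketch of pooling several HC2 units into one share with GlusterFS; the hostnames, brick paths and volume name below are only placeholders, so adapt them to your own setup:

# on every HC2, create a brick directory on the mounted data drive
mkdir -p /srv/gluster/brick1

# from one node, join the others into a trusted pool
gluster peer probe hc2-node2
gluster peer probe hc2-node3
gluster peer probe hc2-node4

# pool all four drives into a single distributed volume
gluster volume create nasvol \
    hc2-node1:/srv/gluster/brick1 hc2-node2:/srv/gluster/brick1 \
    hc2-node3:/srv/gluster/brick1 hc2-node4:/srv/gluster/brick1
gluster volume start nasvol

# mount the whole pool as one filesystem from any client or node
mount -t glusterfs hc2-node1:/nasvol /mnt/nas

From there, /mnt/nas can be exported with Samba or NFS like any ordinary directory, which is what gives you the "single NAS" view on the network.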

dchang0
Posts: 123
Joined: Tue Dec 22, 2015 1:29 pm
languages_spoken: english
ODROIDs: C1+, XU4Q

Re: HC1/HC2 NAS

Unread post by dchang0 » Tue Dec 11, 2018 10:41 am

FYI, to anyone considering the GlusterFS approach: I followed the article memeka linked to (excellent write-up) and found out that avoiding split-brain takes "replica 3" or "replica 3 arbiter 1", so that one node acts as a tie-breaker. This is not mentioned in the write-up.

Practically, this means that I either need to run three HC2, each with the same size HDD (three full copies of the data), or two HC2 (two full copies of the data) plus an SBC with much smaller storage to serve as a metadata cache.
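
For reference, the volume-create line for that layout looks roughly like this (hostnames and brick paths are placeholders):

# two full data bricks plus one small arbiter brick that stores only metadata
gluster volume create gvol replica 3 arbiter 1 \
    hc2-a:/srv/gluster/brick1 hc2-b:/srv/gluster/brick1 \
    arbiter-node:/srv/gluster/arbiter1
gluster volume start gvol

The arbiter brick holds file names and metadata only, not file contents, which is why a small SBC can act as the tie-breaker.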

I am going to go with a low-cost Raspberry Pi Zero W (see UPDATE below) with only 64GB of storage to serve as the arbiter node. It might be too slow for the job, in which case I would use an XU4 with a 64GB eMMC card instead.

I like the GlusterFS on multiple HC2 units approach so far, mainly because it is fanless, ultra-low power consumption, and can scale much more for terabytes per dollar.

UPDATE: The Raspberry Pi Zero W is armv6l, and the official GlusterFS packages are built for arm64. Thus, I would have to build GlusterFS 5.1 from source. It is probably a better idea to go with the XU4, for which GlusterFS 5.1's official packages definitely work.

UPDATE 2: I ended up building GlusterFS 5.1 from the source tarball on the Raspberry Pi Zero W. After building it successfully, I ran into a bug that had already been fixed by the GlusterFS devs here:

https://bugzilla.redhat.com/show_bug.cgi?id=1649054

It now works. I have yet to add this third arbiter node to my cluster (must back up the data first in case things go badly), but that should go well.
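
For anyone repeating the source build, a plain autotools build from the release tarball looks roughly like this; the package names and the tarball URL are from memory, so double-check them against download.gluster.org:

# build prerequisites on Raspbian (names may differ between releases)
sudo apt-get install build-essential autoconf automake libtool flex bison \
    pkg-config libssl-dev libxml2-dev liburcu-dev libacl1-dev uuid-dev

# fetch and unpack the 5.1 release tarball (check the exact URL first)
wget https://download.gluster.org/pub/gluster/glusterfs/5/5.1/glusterfs-5.1.tar.gz
tar xzf glusterfs-5.1.tar.gz
cd glusterfs-5.1

# configure, build and install (this takes a long time on the Zero W)
./configure
make
sudo make install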

UPDATE 3: Adding the RPi Zero W as an arbiter node went well, but I found that the write speed of the cluster is limited by the weakest link, which is of course the RPi Zero W. Also, if the arbiter node runs out of free inodes before the larger nodes do, the whole cluster stops accepting writes.

So it may be better after all to use an XU4 or HC2 as the arbiter to keep up with the other nodes.
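
If anyone copies this setup, it is worth keeping an eye on inode usage on the arbiter. A quick check looks like this (the brick path and volume name are placeholders):

# free inodes on the filesystem holding the arbiter brick
df -i /srv/gluster/arbiter1

# per-brick detail, including free inodes, as reported by gluster itself
gluster volume status gvol detail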
