Re: Sold Out

canoodle
Posts: 4
Joined: Wed Dec 20, 2017 9:04 am
languages_spoken: english
ODROIDs: 2 x HC1, 1x XU4 serves as NES and Sega Emulator X-D
Has thanked: 0
Been thanked: 0

Re: Sold Out

Unread post by canoodle » Mon Mar 25, 2019 6:06 am

Hello,

is there any news on when it will be available?

I desperately need an energy-saving x86 board with dual LAN to build an OPNsense firewall.

Otherwise I will have to look for other options. :o

Any recommendations?

(Right now it is running on a Dell PowerEdge that draws around 100 watts.)

Twinstar1337
Posts: 1
Joined: Mon Mar 25, 2019 11:04 pm
languages_spoken: english
Has thanked: 0
Been thanked: 0

Re: Sold Out

Unread post by Twinstar1337 » Mon Mar 25, 2019 11:11 pm

Hey there,
can you let us pre-order the ODROID N2, please?
I want one, but I missed the launch!
Best regards

mad_ady
Posts: 5676
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 18 times
Been thanked: 18 times

Re: Re: Sold Out

Unread post by mad_ady » Tue Mar 26, 2019 12:35 am

The N2 will launch soon - probably this week or next.

m0thman
Posts: 6
Joined: Mon Sep 14, 2015 10:54 pm
languages_spoken: english, russian
ODROIDs: XU4 + CloudShell
Has thanked: 0
Been thanked: 0

Re: Re: Sold Out

Unread post by m0thman » Wed Mar 27, 2019 5:03 pm

odroid wrote:
Thu Feb 28, 2019 4:29 pm
I'm about to post the meeting results.

Intel is focusing only on the Core i7 and i5 series in Q1 2019 to make more money.
They just told us there should be a chance to produce some Celeron/Gemini Lake processors in Q2, around April or May.
So we have to wait two or three more months for H2 production to resume.
Sorry for the bad news. :(
Is it possible to subscribe somewhere to receive a notification when it becomes available for purchase?

mad_ady
Posts: 5676
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 18 times
Been thanked: 18 times

Re: Re: Sold Out

Unread post by mad_ady » Wed Mar 27, 2019 5:24 pm

It's now available for purchase. Head on over to the store.

odroid
Site Admin
Posts: 30282
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 3 times
Been thanked: 26 times

Re: Re: Sold Out

Unread post by odroid » Wed Mar 27, 2019 5:26 pm

It's the N2, not the H2.
There is still no firm schedule from Intel for H2 production. :(
These users thanked the author odroid for the post:
mxc4 (Wed Apr 17, 2019 11:20 pm)

mad_ady
Posts: 5676
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 18 times
Been thanked: 18 times

Re: Sold Out

Unread post by mad_ady » Wed Mar 27, 2019 7:54 pm

My bad, sorry for the confusion... N and H are too similar.

tmihai20
Posts: 152
Joined: Mon Nov 07, 2016 10:56 pm
languages_spoken: english, french, italian, romanian
ODROIDs: XU4, Go
Location: Romania
Has thanked: 0
Been thanked: 0

Re: Re: Sold Out

Unread post by tmihai20 » Wed Mar 27, 2019 11:41 pm

I would have placed an order this morning when I saw the news on Facebook; I have wishlisted pretty much all the H2 components I need. I am not so lucky. I honestly think we will be able to order the H2 in September this year at the earliest (just a personal wild guess).
Riddle me this, riddle me that
Who is afraid of the big, black bat?
I write (in Romanian mostly) on a blog (see my profile)

wings
Posts: 4
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 0
Been thanked: 0

Re: Re: Sold Out

Unread post by wings » Mon Apr 08, 2019 11:26 am

jit-010101 wrote:
Mon Mar 04, 2019 7:24 pm
things like distributed storage, e.g. 4 of them mounted on a single ATX plate ... e.g. with MooseFS ...
I've got a MooseFS cluster running on 4 ODROID HC2s. It works incredibly well.

odroid
Site Admin
Posts: 30282
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 3 times
Been thanked: 26 times

Re: Re: Sold Out

Unread post by odroid » Mon Apr 08, 2019 11:51 am

wings wrote:
Mon Apr 08, 2019 11:26 am
jit-010101 wrote:
Mon Mar 04, 2019 7:24 pm
things like distributed storage, e.g. 4 of them mounted on a single ATX plate ... e.g. with MooseFS ...
I've got a MooseFS cluster running on 4 ODROID HC2s. It works incredibly well.
I hadn't heard about the MooseFS storage cluster, but it looks really great.
Can you please tell us more about it? If possible, please open a new topic in the appropriate sub-forum.

wings
Posts: 4
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 0
Been thanked: 0

Re: Re: Sold Out

Unread post by wings » Mon Apr 08, 2019 2:19 pm

odroid wrote:
Mon Apr 08, 2019 11:51 am
I hadn't heard about the MooseFS storage cluster, but it looks really great.
Can you please tell us more about it? If possible, please open a new topic in the appropriate sub-forum.
Sure. I call it my Elastic NAS.

I'm using the free/open source Community Edition, so my setup looks like this:

- blinky (master + chunkserver)
- pinky (chunkserver + metalogger)
- inky (chunkserver + metalogger)
- clyde (chunkserver + metalogger)

The master server handles metadata operations for the cluster, and directs clients as to where the data is being read from/written to within the storage cluster. (In other words, metadata ops go through the master, but all other operations are direct).
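
For anyone curious, the client side is just a FUSE mount pointed at the master; after that it behaves like any other POSIX filesystem. Roughly like this - the hostnames, paths and package name are just from my setup, so adjust to taste:

# on any machine that should see the cluster (blinky itself does this for Samba)
sudo apt install moosefs-client     # client package from the official MooseFS repository
sudo mkdir -p /mnt/mfs
sudo mfsmount /mnt/mfs -H blinky    # -H points at the master; reads/writes then go straight to the chunkservers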

The chunkservers each have a 4TB WD Red hard drive, for a total of 16TB of raw storage. Most of my files are set to "copies=2", meaning they should exist on at least two independent chunkservers, so I can lose any one of the four and not lose data. In fact, thanks to the self-healing nature of MooseFS, I could lose three of the four without losing data, as long as there is enough time between failures for the cluster to self-heal and rebalance.
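
If you want to check the replication yourself, the client tools make that easy; the file names here are just examples from my share:

mfsgetgoal /mnt/mfs/movies/example.mkv      # shows the goal (number of copies) set for that file
mfscheckfile /mnt/mfs/movies/example.mkv    # shows how many valid copies of each chunk currently exist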

I've got Samba running on Blinky, which mounts the filesystem and provides file shares for movies, music, documents, etc. From the "client" perspective (and from Samba's perspective), it's just one big NAS. I also have a mirroring script on Blinky that keeps my media collection up to date.
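
The Samba side is nothing special - the shares just point at directories under the MooseFS mount. Something like this sketch (share name and path are only illustrative):

# /etc/samba/smb.conf on blinky
[media]
   path = /mnt/mfs/media
   read only = no
   # restrict access however you normally would, e.g. valid users = @family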

Performance-wise, read and write speeds are around 90-100 MB/s (enough to almost saturate gigabit Ethernet), and I can handle multiple 4K streams from the Elastic NAS.

I've got a series of blog posts on the technicals which I'm getting ready to publish.

I wouldn't mind a few HC1s to test with this kind of setup at some point ;)

odroid
Site Admin
Posts: 30282
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 3 times
Been thanked: 26 times

Re: Re: Sold Out

Unread post by odroid » Mon Apr 08, 2019 2:26 pm

Thank you for the explanation. The Elastic NAS sounds like a reliable and scalable storage solution. :D
Once you are ready, please consider writing an article for our Magazine.

jl_678
Posts: 1
Joined: Sat Apr 13, 2019 3:49 am
languages_spoken: english
Has thanked: 0
Been thanked: 0

Re: Sold Out

Unread post by jl_678 » Sat Apr 13, 2019 3:58 am

This MooseFS thing looks very cool! I am considering doing the same thing to replace an aging NAS. Can you answer a couple of simple questions about it?

1. Given the copies setting of 2, how much usable capacity do you have?
2. Do you need a full 4TB in the master server? It looks like it only holds metadata and so might need significantly less.
3. I like the self-healing idea; however, the master server seems to be a single point of failure. How hard would it be to recover if it died?
4. I am looking forward to your technical blog, so definitely share it.

If I go down this route, I can document the process if you have not already done so.

Thank you.

Sent from my SM-T820 using Tapatalk


wings
Posts: 4
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 0
Been thanked: 0

Re: Sold Out

Unread post by wings » Mon Apr 15, 2019 5:06 pm

jl_678 wrote:
Sat Apr 13, 2019 3:58 am
This MooseFS thing looks very cool! I am considering doing the same thing to replace an aging NAS. Can you answer a couple of simple questions about it?
Sure, I'd love to!
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
1. Given the copies setting of 2, how much usable capacity do you have?
I've got 12TB of disks "in service" (technically, my fourth node does not yet have a hard drive; I'll be purchasing one this weekend, which will bring it up to 16TB of raw capacity).

Of that, I have about 11TiB of storage after overheads and conversion from TB to TiB etc. So 11TiB represents the "real" storage capacity, if that makes sense.

With a goal setting of 2, I could store roughly half of that, so 5.5TiB.

One of the really cool things about MooseFS, however, is that you can set goals on a folder-by-folder or file-by-file basis. This means you can do things like have the default copies setting be 2 (as in my cluster) but set special folders to have more or fewer copies. My ISO images folder is set to 1 copy, as I don't care if I lose it, but my partner's music collection is set to 3 copies for added durability at the cost of a bit more disk usage.
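
Setting those per-folder goals is a one-liner each; the paths below are just examples from my layout:

mfssetgoal -r 1 /mnt/mfs/isos     # -r applies recursively; 1 copy, since I can afford to lose these
mfssetgoal -r 3 /mnt/mfs/music    # 3 copies for the data we actually care about
mfsgetgoal -r /mnt/mfs/music      # summarises the goals currently set under that directory
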
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
2. Do you need a full 4TB in the master server? It looks like it only holds metadata and so might need significantly less.
The only reason the master server has a 4TB disk is that it also runs a chunkserver: the moosefs-master service and a moosefs-chunkserver service both live on it.
In a similar vein, the remaining servers run a moosefs-chunkserver service and *also* run a moosefs-metalogger service.
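
Concretely, each node just runs the packages from the official MooseFS repository plus a couple of config files. Roughly like this - I'm writing it from memory, so double-check the service and file names against the MooseFS docs:

# blinky (master + chunkserver)
sudo systemctl enable --now moosefs-master moosefs-chunkserver

# pinky / inky / clyde (chunkserver + metalogger)
sudo systemctl enable --now moosefs-chunkserver moosefs-metalogger

# each chunkserver lists its data disk in /etc/mfs/mfshdd.cfg, e.g. the mount point of the 4TB WD Red:
#   /mnt/hdd
# chunkservers and metaloggers find the master via MASTER_HOST in their /etc/mfs/*.cfg files
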
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
3. I like the self-healing idea; however, the master server seems to be a single point of failure. How hard would it be to recover if it died?
Yes, the master is a single point of failure. However, the metaloggers running on the other nodes are designed to "follow" the master; each keeps a copy of the metadata set and tries to keep it up to date.

In the event of the master drastically failing, you can "promote" a metalogger to a master using its copy of the metadata (either by changing which IP the clients point to, or by moving the IP to the metalogger you are promoting).

In practice, recovering from a failed master is actually very easy using that method.
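
For reference, the promotion looks roughly like this - again from memory, so treat it as a sketch rather than a runbook and check the MooseFS docs for the exact file names:

# on the metalogger being promoted (say, pinky)
sudo apt install moosefs-master
# the metalogger keeps its metadata backup and changelogs in /var/lib/mfs;
# they may need renaming (metadata_ml.* -> metadata.*) before the master will pick them up
sudo mfsmaster -a       # -a rebuilds the current metadata from the newest backup plus the changelogs
# then repoint clients and chunkservers at the new master (DNS entry or floating IP)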

For what it's worth, maintenance in general is a breeze - when I add the fourth and final hard drive I'll be able to put it into service without taking down or interrupting MooseFS in any way. It's completely transparent most of the time.
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
4. I am looking forward to your technical blog, so definitely share it.
Thanks! I'm hoping to publish it in the next week or so but have been very busy.

Definitely let me know how you get on if you try it before my posts hit. If you want to talk more about MooseFS, you can PM me here, hit me up on Twitter (@gnomethrower), or email me.

elatllat
Posts: 1226
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1
Has thanked: 0
Been thanked: 2 times

Re: Re: Sold Out

Unread post by elatllat » Wed Apr 17, 2019 11:08 am

wings wrote:
Mon Apr 08, 2019 2:19 pm
...master...
Any reason you opted not to use a multi-master system like Ceph?

wings
Posts: 4
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 0
Been thanked: 0

Re: Re: Sold Out

Unread post by wings » Wed Apr 17, 2019 9:23 pm

elatllat wrote:
Wed Apr 17, 2019 11:08 am
Any reason you opted to not use a multi-master system like Ceph?
I have less experience with Ceph; it's significantly more complicated, and I've had trouble getting it up and running over 3 or 4 attempts in the past. I did recently have success with it, and it's getting significantly easier to use, but it wasn't appropriate for this particular project.

That being said, it's funny you mention that... my next project is a 3-node Ceph cluster using ODROID HC1s with SSDs.

elatllat
Posts: 1226
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1
Has thanked: 0
Been thanked: 2 times

Re: Re: Sold Out

Unread post by elatllat » Thu Apr 18, 2019 11:31 pm

Shamelessly thread hijacking...
wings wrote:
Wed Apr 17, 2019 9:23 pm
...My next project is a 3-node Ceph cluster...
AFAIK Ceph and Lustre are the only FSs that offer transparent high availability and scalability (auto-sharding), but both seem more memory-heavy than strictly necessary (normal file systems let the indexes live on SSD without any latency problems).
