Re: Sold Out

canoodle
Posts: 4
Joined: Wed Dec 20, 2017 9:04 am
languages_spoken: english
ODROIDs: 2 x HC1, 1x XU4 serves as NES and Sega Emulator X-D
Has thanked: 0
Been thanked: 0
Contact:

Re: Sold Out

Unread post by canoodle » Mon Mar 25, 2019 6:06 am

Hello,

Is there any news on when it will be available?

I desperately need an energy-saving x86 board like this with dual LAN to build an OPNsense firewall.

Otherwise I'll have to look at other options. :o

Any recommendations?

(Right now it's running on a Dell PowerEdge that draws around 100 W.)

Twinstar1337
Posts: 1
Joined: Mon Mar 25, 2019 11:04 pm
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Sold Out

Unread post by Twinstar1337 » Mon Mar 25, 2019 11:11 pm

Hey there,
Can you let us pre-order the ODROID N2, please?
I want one, but I missed the launch!
Best regards

User avatar
mad_ady
Posts: 6384
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 146 times
Been thanked: 107 times
Contact:

Re: Re: Sold Out

Unread post by mad_ady » Tue Mar 26, 2019 12:35 am

N2 will launch soon - probably this or next week.

User avatar
m0thman
Posts: 13
Joined: Mon Sep 14, 2015 10:54 pm
languages_spoken: english, russian
ODROIDs: XU4 + CloudShell
Has thanked: 1 time
Been thanked: 1 time
Contact:

Re: Re: Sold Out

Unread post by m0thman » Wed Mar 27, 2019 5:03 pm

odroid wrote:
Thu Feb 28, 2019 4:29 pm
I'm about to post the meeting results.

Intel is focusing only on the Core i7 and i5 series in Q1 2019 to make more money.
They just told us there may be a chance to produce some Celeron/Gemini Lake processors in Q2, around April or May.
So we have to wait 2 or 3 more months to resume H2 production.
Sorry for the bad news. :(
Is it possible to subscribe somewhere to receive a notification when it becomes available for purchase?

User avatar
mad_ady
Posts: 6384
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 146 times
Been thanked: 107 times
Contact:

Re: Re: Sold Out

Unread post by mad_ady » Wed Mar 27, 2019 5:24 pm

It's now available for purchase. Head over to the store.

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Wed Mar 27, 2019 5:26 pm

It's the N2, not the H2.
There is still no firm schedule from Intel for H2 production. :(
These users thanked the author odroid for the post:
mxc4 (Wed Apr 17, 2019 11:20 pm)

User avatar
mad_ady
Posts: 6384
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 146 times
Been thanked: 107 times
Contact:

Re: Sold Out

Unread post by mad_ady » Wed Mar 27, 2019 7:54 pm

My bad, sorry for the confusion... N and H are too similar.

tmihai20
Posts: 198
Joined: Mon Nov 07, 2016 10:56 pm
languages_spoken: english, french, italian, romanian
ODROIDs: XU4, GO, H2
Location: Romania
Has thanked: 20 times
Been thanked: 3 times
Contact:

Re: Re: Sold Out

Unread post by tmihai20 » Wed Mar 27, 2019 11:41 pm

I would have placed an order this morning when I saw the news on Facebook; I have wishlisted pretty much all the H2 components I need. I am not so lucky. Honestly, I think we will be able to order the H2 in September this year at the earliest (just a personal wild guess).
Riddle me this, riddle me that
Who is afraid of the big, black bat?
I write (in Romanian mostly) on a blog (see my profile)

wings
Posts: 6
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 2 times
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by wings » Mon Apr 08, 2019 11:26 am

jit-010101 wrote:
Mon Mar 04, 2019 7:24 pm
things like distributed storage, e.g. 4 of them mounted on a single ATX plate ... e.g. with MooseFS ...
I've got a MooseFS cluster running on 4 ODROID HC2s. It works incredibly well.

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Mon Apr 08, 2019 11:51 am

wings wrote:
Mon Apr 08, 2019 11:26 am
jit-010101 wrote:
Mon Mar 04, 2019 7:24 pm
things like distributed storage, e.g. 4 of them mounted on a single ATX plate ... e.g. with MooseFS ...
I've got a MooseFS cluster running on 4 ODROID HC2s. It works incredibly well.
I hadn't heard of the MooseFS storage cluster, but it looks really great.
Can you please tell us more about it? If possible, please open a new topic in the right sub-forum.

wings
Posts: 6
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 2 times
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by wings » Mon Apr 08, 2019 2:19 pm

odroid wrote:
Mon Apr 08, 2019 11:51 am
I hadn't heard of the MooseFS storage cluster, but it looks really great.
Can you please tell us more about it? If possible, please open a new topic in the right sub-forum.
Sure. I call it my Elastic NAS.

I'm using the free/open source Community Edition, so my setup looks like this:

- blinky (master + chunkserver)
- pinky (chunkserver + metalogger)
- inky (chunkserver + metalogger)
- clyde (chunkserver + metalogger)

The master server handles metadata operations for the cluster, and directs clients as to where the data is being read from/written to within the storage cluster. (In other words, metadata ops go through the master, but all other operations are direct).

The chunkservers each have a 4TB WD Red hard drive, for a total of 16TB of raw storage. Most of my files are set to "copies=2", meaning they should exist on at least two independent chunkservers. This means I can lose any one of the 4 and not lose data. In fact, due to the self-healing nature of MooseFS, I can lose 3 of the 4 and not lose data, as long as there is sufficient time between failures for the cluster to self-heal and rebalance.

I've got SAMBA running on Blinky which mounts the filesystem and provides file shares for movies, music, documents etc. From the "client" perspective (and from SAMBA's perspective), it's just one big NAS. I also have a mirroring script on Blinky that ensures my media collection is up-to-date.
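For anyone wanting to reproduce this layout, here is a minimal sketch, assuming the stock moosefs-client package and this cluster's hostname (blinky); the mount point and share name are made up for illustration:

```shell
# Mount the cluster root on the SAMBA host. mfsmount talks to the
# master (-H) for metadata only; file data moves directly between
# this host and the chunkservers.
sudo apt-get install moosefs-client
sudo mkdir -p /mnt/mfs
sudo mfsmount /mnt/mfs -H blinky

# /etc/samba/smb.conf fragment exporting one folder of the cluster
# as an ordinary share -- to SAMBA it is just a local directory:
#   [media]
#       path = /mnt/mfs/media
#       read only = no
```

Because SAMBA only ever sees a mounted directory, clients see the whole cluster as a single big NAS.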

Performance-wise, read and write speeds are around 90-100MB/s (enough to almost saturate gigabit) and I can handle multiple 4K streams from the Elastic NAS.

I've got a series of blog posts on the technicals which I'm getting ready to publish.

I wouldn't mind a few HC1s to test with this kind of setup at some point ;)

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Mon Apr 08, 2019 2:26 pm

Thank you for the explanation. The Elastic NAS must be a reliable and scalable storage solution. :D
Once you are ready, please consider writing an article for our Magazine.

jl_678
Posts: 2
Joined: Sat Apr 13, 2019 3:49 am
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Sold Out

Unread post by jl_678 » Sat Apr 13, 2019 3:58 am

This MooseFS thing looks very cool! I am considering doing the same thing to replace an aging NAS. Can you answer a couple of simple questions about it?

1. Given the setting of 2, how much usable capacity do you have?
2. Do you need a full 4TB in the master server? It looks like it only holds metadata, and so might need significantly less.
3. I like the self-healing idea; however, the master server seems to be a single point of failure. How hard would it be to recover it if it died?
4. I am looking forward to your technical blog, so definitely share it.

If I go down this route, I can document the process if you have not already done so.

Thank you.

Sent from my SM-T820 using Tapatalk


wings
Posts: 6
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 2 times
Been thanked: 0
Contact:

Re: Sold Out

Unread post by wings » Mon Apr 15, 2019 5:06 pm

jl_678 wrote:
Sat Apr 13, 2019 3:58 am
This MooseFS thing looks very cool! I am considering doing the same thing to replace an aging NAS. Can you answer a couple of simple questions about it?
Sure, I'd love to!
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
1. Given the setting of 2, how much usable capacity do you have?
I've got 12TB of disks "in service" (technically, my fourth node does not yet have a hard drive; I'll be purchasing one this weekend, bringing it up to 16TB of raw capacity).

Of that, I have about 11TiB of storage after overheads and conversion from TB to TiB etc. So 11TiB represents the "real" storage capacity, if that makes sense.

With a goal setting of 2, I could store roughly half of that, so 5.5TiB.

One of the really cool things about MooseFS, however, is that you can set goals on a folder-by-folder or file-by-file basis. This means you can do things like have the default copies setting be 2 (as in my cluster) but set special folders to have more or fewer copies. My ISO images folder is set to 1 copy as I don't care if I lose it, but my partner's music collection is set to 3 copies for added durability at the cost of a bit more disk usage.
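The capacity arithmetic above is easy to sanity-check. A small sketch follows; the mfssetgoal lines are shown as comments because they need a live cluster and a mounted filesystem, the folder paths are made up, and TB-to-TiB conversion is the only "overhead" modeled:

```shell
# On a live mount, per-folder goals would be set roughly like this:
#   mfssetgoal -r 1 /mnt/mfs/isos     # expendable data: one copy
#   mfssetgoal -r 3 /mnt/mfs/music    # precious data: three copies
#   mfsgetgoal /mnt/mfs/music         # verify the setting

# Back-of-envelope usable capacity at the default goal of 2:
raw_tib=$(awk 'BEGIN { printf "%.1f", 12e12 / 2^40 }')        # 12 TB of disks -> TiB
usable_tib=$(awk 'BEGIN { printf "%.1f", 12e12 / 2^40 / 2 }') # two copies of everything
echo "${raw_tib} TiB raw, ~${usable_tib} TiB usable at goal 2"
```

This prints "10.9 TiB raw, ~5.5 TiB usable at goal 2", matching the roughly 11 TiB and 5.5 TiB figures in the post above.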
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
2. Do you need a full 4TB in the master server? It looks like it only holds Metadata and so might need significantly less.
The only reason the master server has a 4TB disk is because it also runs a chunkserver. The moosefs-master service and a moosefs-chunkserver service both live on it.
In a similar vein, the remaining servers run a moosefs-chunkserver service and *also* run a moosefs-metalogger service.
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
3. I like the self healing idea; however, the master server seems to be a single point of failure. How hard would it be to recover that if it died?
Yes, the master is a single point of failure. However, the metaloggers running on the other nodes are designed to "follow" the master and each keeps a copy of the metadata set and tries to keep it up to date.

In the event of the master drastically failing, you can "promote" a metalogger to a master using its copy of the metadata (either by changing which IP the clients point to, or by moving the IP to the metalogger you are promoting).

In practice, recovering from a failed master is actually very easy using that method.

For what it's worth, maintenance in general is a breeze - when I add the fourth and final hard drive I'll be able to put it into service without taking down or interrupting MooseFS in any way. It's completely transparent most of the time.
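A rough sketch of that promotion path, assuming the stock Debian-style packages and the default data directory /var/lib/mfs; tool names changed between MooseFS releases, so treat this as an outline rather than a recipe:

```shell
# On the metalogger being promoted: rebuild the newest metadata set
# from the metalogger's metadata backup plus its downloaded changelogs.
cd /var/lib/mfs
mfsmetarestore -a      # older releases; MooseFS 3.x can start 'mfsmaster -a' instead

# Install and start the master service on this node.
sudo apt-get install moosefs-master
sudo systemctl start moosefs-master

# Finally, repoint clients: either move the old master's IP address to
# this node, or update the name that clients pass to mfsmount -H.
```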
jl_678 wrote:
Sat Apr 13, 2019 3:58 am
4. I am looking forward to your technical blog, so definitely share it.
Thanks! I'm hoping to publish it in the next week or so but have been very busy.

Definitely let me know how you get on if you try it before my posts hit. If you want to talk more about Moose you can PM me on here, hit me on Twitter (@gnomethrower) or email me.

elatllat
Posts: 1437
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 10 times
Been thanked: 28 times
Contact:

Re: Re: Sold Out

Unread post by elatllat » Wed Apr 17, 2019 11:08 am

wings wrote:
Mon Apr 08, 2019 2:19 pm
...master...
Any reason you opted not to use a multi-master system like Ceph?

wings
Posts: 6
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 2 times
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by wings » Wed Apr 17, 2019 9:23 pm

elatllat wrote:
Wed Apr 17, 2019 11:08 am
Any reason you opted not to use a multi-master system like Ceph?
I have less experience with Ceph; it's significantly more complicated, and I've had trouble getting it up and running in the past, over 3 or 4 attempts. I did recently have success with it, and it's getting significantly easier to use, but it wasn't appropriate for this particular project.

That being said, it's funny you mention that... My next project is a 3-node Ceph cluster using ODroid HC1s with SSDs.

elatllat
Posts: 1437
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 10 times
Been thanked: 28 times
Contact:

Re: Re: Sold Out

Unread post by elatllat » Thu Apr 18, 2019 11:31 pm

Shamelessly thread hijacking...
wings wrote:
Wed Apr 17, 2019 9:23 pm
...My next project is a 3-node Ceph cluster...
AFAIK Ceph and Lustre are the only FSs that offer transparent high availability and scalability (auto-sharding), but both seem more memory-heavy than strictly necessary (normal file systems let the indexes live on SSD without any latency problems).

jl_678
Posts: 2
Joined: Sat Apr 13, 2019 3:49 am
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by jl_678 » Wed Apr 24, 2019 5:16 am

@wings I am moving forward with MooseFS and had another question for you. Your chunkservers run separate processes that consume space (the one master server process and three metaloggers). In all those cases, do you carve out space on the 4TB disk for these additional uses, or do you use storage from another source, like an SD card? I am trying to understand whether you can use the HDD for anything other than MooseFS chunks on a chunkserver.

User avatar
roots
Posts: 17
Joined: Thu Feb 28, 2019 4:16 pm
languages_spoken: English, Romanian
ODROIDs: XU4
Location: Romania
Has thanked: 4 times
Been thanked: 1 time
Contact:

Re: Re: Sold Out

Unread post by roots » Wed Apr 24, 2019 3:50 pm

@odroid
Are there any updates on the H2? :(

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Wed Apr 24, 2019 4:50 pm

We prepared the raw materials for the next production batch, excluding the CPU, a few months ago.
But there is no firm schedule from Intel yet.
So I can't tell you how many more months we have to wait. Sorry about that.
These users thanked the author odroid for the post:
tmihai20 (Thu Apr 25, 2019 6:01 am)

wings
Posts: 6
Joined: Mon Apr 08, 2019 11:24 am
languages_spoken: english
ODROIDs: 4x HC2
Has thanked: 2 times
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by wings » Sat Apr 27, 2019 1:54 pm

jl_678 wrote:
Wed Apr 24, 2019 5:16 am
@wings I am moving forward with MooseFS and had another question for you. Your chunkservers run separate processes that consume space (the one master server process and three metaloggers). In all those cases, do you carve out space on the 4TB disk for these additional uses, or do you use storage from another source, like an SD card? I am trying to understand whether you can use the HDD for anything other than MooseFS chunks on a chunkserver.
The root filesystem of each node is stored on the SD card, so by extension the metadata from the master/metaloggers also lives on the SD card. The 4TB disk is used purely for storing chunk data. I've laid an XFS volume straight down on the disk; it's not even partitioned. It is then automounted on boot, just before the chunkserver comes up.

(Note that the master stores the full metadata set in RAM, so your SD card doesn't need to be particularly fast.)
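A sketch of that disk preparation, assuming the Debian-style chunkserver config at /etc/mfs/mfshdd.cfg; the device name and mount point are examples, not this cluster's:

```shell
# XFS straight on the raw device -- no partition table, as described above.
sudo mkfs.xfs /dev/sda
sudo mkdir -p /srv/mfs-chunks

# /etc/fstab entry so the volume is mounted before the chunkserver starts;
# mounting by UUID (taken from 'blkid /dev/sda') survives device renames:
#   UUID=...  /srv/mfs-chunks  xfs  defaults,noatime  0  2
sudo mount /srv/mfs-chunks

# Tell the chunkserver where to keep its chunks:
echo '/srv/mfs-chunks' | sudo tee -a /etc/mfs/mfshdd.cfg
```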

nsummy
Posts: 4
Joined: Thu Feb 07, 2019 2:33 am
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by nsummy » Fri May 03, 2019 2:16 am

For anyone interested, you can get the ASRock J4105M for $75 on Newegg. Obviously not as small as the H2, but if you are set on Gemini Lake it's probably the best option for the near future. I have a feeling this shortage will drag on past the summer: https://www.newegg.com/Product/Product. ... gnorebbr=1

stmicro
Posts: 249
Joined: Tue Apr 28, 2015 4:23 pm
languages_spoken: english, chinese
ODROIDs: Many Odroids and Rpis.
Location: shenzhen china
Has thanked: 0
Been thanked: 1 time
Contact:

Re: Re: Sold Out

Unread post by stmicro » Fri May 03, 2019 10:14 am

nsummy wrote:
Fri May 03, 2019 2:16 am
For anyone interested you can get the asrock 4105m for $75 on Newegg. Obviously not as small as the H2, but if you are set on Gemini Lake its probably the best option for the near future. I have a feeling this shortage will drag on past the summer: https://www.newegg.com/Product/Product. ... gnorebbr=1
Thanks for the link. The price is quite good.
But there are some weak points:
It has no NVMe slot. :(
It has no dual NIC. :(
I'd have to buy a big ATX power brick. :(
I miss the H2 :(

fvolk
Posts: 284
Joined: Sun Jun 05, 2016 11:04 pm
languages_spoken: english
ODROIDs: C2, HC1, H2
Has thanked: 0
Been thanked: 9 times
Contact:

Re: Sold Out

Unread post by fvolk » Fri May 03, 2019 5:47 pm

If the latest Intel roadmap leak is to be believed ;-), e.g. see
https://img.purch.com/intel-mobile-road ... xlLnBuZw==
Gemini Lake ends in Q2/19, and Gemini Lake Refresh processors are the replacement...

domih
Posts: 102
Joined: Mon Feb 11, 2019 4:48 pm
languages_spoken: English, French
ODROIDs: UX4, HC2, N2, H2.
Has thanked: 31 times
Been thanked: 21 times
Contact:

Re: Re: Sold Out

Unread post by domih » Tue May 07, 2019 9:12 am

stmicro wrote:
Fri May 03, 2019 10:14 am
nsummy wrote:
Fri May 03, 2019 2:16 am
For anyone interested you can get the asrock 4105m for $75 on Newegg. Obviously not as small as the H2, but if you are set on Gemini Lake its probably the best option for the near future. I have a feeling this shortage will drag on past the summer: https://www.newegg.com/Product/Product. ... gnorebbr=1
Thanks for the link. The price is quite good.
But there are some weak points:
It has no NVMe slot. :(
It has no dual NIC. :(
I'd have to buy a big ATX power brick. :(
I miss the H2 :(
You do not have to go with a big ATX power brick; Google "picoPSU". For cases, Google the Morex 557 or MITXPC Compact Mini-ITX (both with or without a picoPSU), or the Antec ISK110, etc.

A good (better...) alternative to the ASRock J4105M is the Gigabyte GB-BLCE-4105. IMHO, the ASRock J4105M plus accessories ends up a more expensive solution than the GB-BLCE-4105. But then again, the ASRock has PCIe slots and the Gigabyte does not; in exchange, the latter does have a PCIe NVMe SSD 2280 slot.

Anyway, this is all while waiting for the availability of the ODROID H2, which IMHO (again) is a better offer because you get 2x SATA, 2x 1GbE Ethernet, and a PCIe NVMe SSD 2280 slot, plus GPIO expansion.

Sergey3
Posts: 2
Joined: Tue May 07, 2019 8:24 pm
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by Sergey3 » Wed May 08, 2019 5:10 pm

domih wrote:
Tue May 07, 2019 9:12 am
A good (better...) alternative to the ASRock J4105M is the Gigabyte GB-BLCE-4105. IMHO, the ASRock J4105M plus accessories ends up a more expensive solution than the GB-BLCE-4105. But then again, the ASRock has PCIe slots and the Gigabyte does not; in exchange, the latter does have a PCIe NVMe SSD 2280 slot.
GA-SBCAP3350 https://www.gigabyte.com/Motherboard/GA ... -rev-10#ov
Built-in Intel® Celeron™ N3350 (up to 2.4 GHz) dual-core processor
146x102mm Form Factor with Wide Range 9~36V DC-In Power Design

GA-SBCAP3450 https://www.gigabyte.com/Motherboard/GA ... -rev-11#ov
Built-in Intel® Celeron™ N3450 (up to 2.2 GHz) quad-core processor
SBC 146x102mm Form Factor with Wide Range 9~36V DC-In Power Design

GA-SBCAP4200 http://www.gigabyte.ru/products/page/mb ... 0rev_10#kf
Built-in Intel® Pentium™ N4200 (up to 2.5 GHz) quad-core processor
SBC 146x102mm Form Factor with Wide Range 9~36V DC-In Power Design


elatllat
Posts: 1437
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 10 times
Been thanked: 28 times
Contact:

Re: Re: Sold Out

Unread post by elatllat » Fri May 10, 2019 9:24 pm

Sergey3 wrote:
Wed May 08, 2019 5:10 pm
...gigabyte...
is not selling those on its website or at the first two online retailers the website suggests (Amazon, B&H). Looks like vaporware to me.

jit-010101
Posts: 31
Joined: Tue Mar 13, 2018 9:40 pm
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Sold Out

Unread post by jit-010101 » Sun May 12, 2019 12:16 am

wings wrote:
Mon Apr 15, 2019 5:06 pm
...MooseFS...
Thanks for sharing your experiences. I've also come across this presentation (comparing GlusterFS, Ceph, MooseFS and more):

https://www.slideshare.net/mobile/azili ... ed-storage

In it, there's a stat benchmark in which MooseFS is 4x faster than GlusterFS for a lot of small files.

There's also an official Docker Plugin:

https://moosefs.com/blog/docker-volume- ... lable-now/

So for a home cluster with Kubernetes, this might be the perfect FS for uptime and performance.

domih
Posts: 102
Joined: Mon Feb 11, 2019 4:48 pm
languages_spoken: English, French
ODROIDs: UX4, HC2, N2, H2.
Has thanked: 31 times
Been thanked: 21 times
Contact:

Re: Re: Sold Out

Unread post by domih » Mon May 13, 2019 12:47 pm

elatllat wrote:
Fri May 10, 2019 9:24 pm
Sergey3 wrote:
Wed May 08, 2019 5:10 pm
...gigabyte...
is not selling those on its website or the first 2 online retailers the website suggests (Amazon, B&H). Looks like vaporware to me.
The motherboards Sergey3 refers to (semi-industrial mobos) were not on the market in the US, but they were in Canada and, it seems, in Europe/Russia. You can find a few references to them on eBay, or if you dig deep into Google. This being said, these are Apollo Lake, not Gemini Lake. Most of them expect DDR3, have no NVMe slot, and so on; not surprising, given they are the previous generation (circa 2016/17 vs. end-2017). There is also the 2-core vs. 4-core difference. You can compare the differences on Intel's Ark.

IMHO, you are better off finding a J4105 (Celeron) or J5005 (Pentium) on eBay and adding a USB 3 -> Ethernet adapter if 2x 1GbE is a requirement.

Funny fact: there are plenty of HP and Dell mini-boxes based on either one on sale on eBay. The HP ones come with 2 SO-DIMM slots (the mobo manufacturer is the same as for the Gigabyte Brix 4105); the Dell ones come with only one slot. So with HP you can have 16GB or 32GB of RAM; with Dell you can't.

Again, the ODROID H2 configuration makes much more sense to me anyway; let's hope that Intel cleans up its act with its partners by resuming Gemini Lake production sooner rather than later.

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Mon May 13, 2019 1:44 pm

I think we can start to sell the H2 again in mid-June.
I can confirm the schedule by the end of next week.
These users thanked the author odroid for the post (total 13):
odroidmame (Mon May 13, 2019 6:39 pm) • Trupik (Mon May 13, 2019 8:41 pm) • XRovertoX (Mon May 13, 2019 8:42 pm) • mad_ady (Mon May 13, 2019 9:03 pm) • tmihai20 (Mon May 13, 2019 9:12 pm) • rooted (Tue May 14, 2019 6:37 am) • Ameridroid (Tue May 14, 2019 9:11 am) • roots (Tue May 14, 2019 1:41 pm) • Txbirds (Tue May 14, 2019 8:29 pm) • bigbrovar (Wed May 15, 2019 12:22 pm) and 3 more users

User avatar
CarlitoxxPro
Posts: 1
Joined: Thu May 16, 2019 7:50 pm
languages_spoken: English, Spanish
Location: Spain
Has thanked: 1 time
Been thanked: 0
Contact:

Re: Sold Out

Unread post by CarlitoxxPro » Thu May 16, 2019 7:54 pm

That is great news. :o

I have some questions: does the H2 support WOL and PXE? Can you provide some BIOS menu screenshots? @odroid

Kindest regards.

User avatar
tobetter
Posts: 3786
Joined: Mon Feb 25, 2013 10:55 am
languages_spoken: Korean, English
ODROIDs: X, X2, U2, U3, XU3, C1
Location: Paju, South Korea
Has thanked: 29 times
Been thanked: 130 times
Contact:

Re: Sold Out

Unread post by tobetter » Thu May 16, 2019 10:17 pm

CarlitoxxPro wrote:
Thu May 16, 2019 7:54 pm
That is great news. :o

I have some questions: does the H2 support WOL and PXE? Can you provide some BIOS menu screenshots? @odroid

Kindest regards.
WOL is supported,
https://wiki.odroid.com/odroid-h2/appli ... ake_on_lan

For PXE,
viewtopic.php?f=168&t=33253

BIOS for H2 is not complicated...
https://magazine.odroid.com/article/odr ... te-access/
https://wiki.odroid.com/odroid-h2/hardw ... ios_update

User avatar
mad_ady
Posts: 6384
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, N1, H2, N2
Location: Bucharest, Romania
Has thanked: 146 times
Been thanked: 107 times
Contact:

Re: Re: Sold Out

Unread post by mad_ady » Fri May 17, 2019 4:22 pm

I can confirm that WOL works without installing a newer driver under Ubuntu 19.04:

Code: Select all

adrianp@lego:~$ uname -a
Linux lego 5.0.0-15-generic #16-Ubuntu SMP Mon May 6 17:41:33 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
adrianp@lego:~$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=19.04
DISTRIB_CODENAME=disco
DISTRIB_DESCRIPTION="Ubuntu 19.04"

So you only need to do the steps from here: https://wiki.odroid.com/odroid-h2/appli ... ol_enabled
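For anyone trying this, arming and testing WOL looks roughly like the following; the interface name and MAC address are placeholders, not this board's:

```shell
# On the H2: check that the NIC advertises magic-packet wake ('g')
# and arm it until the next boot.
sudo ethtool enp2s0 | grep -i wake-on
sudo ethtool -s enp2s0 wol g

# Power the H2 down, then from another machine on the same LAN send
# the magic packet to the H2's MAC address:
sudo etherwake 00:11:22:33:44:55     # or: wakeonlan 00:11:22:33:44:55
```

Note that ethtool settings usually reset at boot, so a persistent setup typically reapplies `wol g` from a systemd unit or a network-manager hook.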
These users thanked the author mad_ady for the post:
tobetter (Fri May 17, 2019 4:25 pm)

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Fri May 17, 2019 5:05 pm

@mad_ady,
Glad to hear that kernel 5.0 includes the bug fix by default.

jit-010101
Posts: 31
Joined: Tue Mar 13, 2018 9:40 pm
languages_spoken: english
Has thanked: 0
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by jit-010101 » Fri May 17, 2019 10:07 pm

odroid wrote:
Mon May 13, 2019 1:44 pm
I think we can start to sell the H2 again in mid-June.
I can confirm the schedule by the end of next week.
That's awesome... What does that mean for the distributors? Will they receive stock beforehand? Can we pre-order?

elatllat
Posts: 1437
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2
Has thanked: 10 times
Been thanked: 28 times
Contact:

Re: Re: Sold Out

Unread post by elatllat » Fri May 17, 2019 10:42 pm

jit-010101 wrote:
Fri May 17, 2019 10:07 pm
...Will [distributors] receive stock beforehand? Can we preorder?
No, and no (well, some distributors offer pre-orders).

RomaT
Posts: 213
Joined: Thu Oct 23, 2014 4:48 pm
languages_spoken: Russian
ODROIDs: -H2 rev.B, -XU3, -XU4, -C1, -C2, -W, -VU, CloudShell
Location: Perm, Russia
Has thanked: 4 times
Been thanked: 31 times
Contact:

Re: Re: Sold Out

Unread post by RomaT » Wed May 22, 2019 6:05 pm

odroid wrote:
Mon May 13, 2019 1:44 pm
I think we can start to sell the H2 again in mid-June.
I can confirm the schedule by the end of next week.
Will it be a new revision of the PCB?
I'm interested in support for a PCIe SATA controller in the M.2 slot,
e.g. the IO-M2F585-5I or IO-M2F9235-4I.

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Wed May 22, 2019 6:24 pm

@RomaT,
Yes, we tested the IO-M2F9235-4I on the new H2 Rev-B samples and it worked well.
But we needed a large ATX PSU to run six HDDs, including two on the H2's native SATA ports.

BTW, Intel just confirmed the delivery schedule of the J4105 SoC.
So we are checking our production schedule as well as the other raw materials.
We will share the H2 re-launch schedule early next week.
I hope we can start shipping H2 boards four to five weeks later.
These users thanked the author odroid for the post (total 4):
madanra (Wed May 22, 2019 6:27 pm) • mad_ady (Wed May 22, 2019 6:45 pm) • powerful owl (Wed May 22, 2019 6:57 pm) • tmihai20 (Wed May 22, 2019 9:07 pm)

RomaT
Posts: 213
Joined: Thu Oct 23, 2014 4:48 pm
languages_spoken: Russian
ODROIDs: -H2 rev.B, -XU3, -XU4, -C1, -C2, -W, -VU, CloudShell
Location: Perm, Russia
Has thanked: 4 times
Been thanked: 31 times
Contact:

Re: Re: Sold Out

Unread post by RomaT » Wed May 22, 2019 6:32 pm

odroid wrote:
Wed May 22, 2019 6:24 pm
ATX PSU
By the way, how about powering it from +12V?
A +15V power supply is very inconvenient.

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Wed May 22, 2019 6:47 pm

If you don't use any 3.5-inch HDDs, a 12V PSU will be fine.
If you want to connect six 3.5-inch HDDs, you will probably need a ~200W PSU.
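That 200W figure is consistent with simple worst-case arithmetic; the per-drive numbers below are typical 3.5-inch datasheet values, assumed for illustration rather than measured:

```shell
# Worst case: all six drives spin up at once, drawing ~2 A each on the
# 12 V rail, plus ~15 W for the board itself.
drives=6
spinup_w=$(awk -v n="$drives" 'BEGIN { print n * 2 * 12 }')
peak_w=$(awk -v n="$drives" 'BEGIN { print n * 2 * 12 + 15 }')
echo "~${spinup_w} W drive spin-up, ~${peak_w} W peak with the board"
```

This prints "~144 W drive spin-up, ~159 W peak with the board"; adding normal PSU headroom lands close to the 200W suggested here.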
Oops! We've gone too far off topic.
Make another topic if you need further discussion about building a six-bay NAS system.

Let's keep talking about when the H2 will be available again.
I will share the H2 Rev-B information in several days.
These users thanked the author odroid for the post:
roots (Wed May 22, 2019 7:25 pm)

RomaT
Posts: 213
Joined: Thu Oct 23, 2014 4:48 pm
languages_spoken: Russian
ODROIDs: -H2 rev.B, -XU3, -XU4, -C1, -C2, -W, -VU, CloudShell
Location: Perm, Russia
Has thanked: 4 times
Been thanked: 31 times
Contact:

Re: Re: Sold Out

Unread post by RomaT » Wed May 22, 2019 6:54 pm

IMHO you shouldn't worry about powering the additional drives; nobody is going to feed them through the PCB.
If users install extra drives, they will power them directly from an ATX power supply.

jasonhurd
Posts: 6
Joined: Wed Sep 07, 2016 11:52 am
languages_spoken: english
ODROIDs: C2 : H2
Has thanked: 1 time
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by jasonhurd » Thu May 23, 2019 10:35 am

odroid wrote:
Wed May 22, 2019 6:47 pm
H2 Rev-B information
Oh.. Is Rev2 going to be available when the H2 is back in stock?

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Thu May 23, 2019 10:36 am

jasonhurd wrote:
Thu May 23, 2019 10:35 am
odroid wrote:
Wed May 22, 2019 6:47 pm
H2 Rev-B information
Oh.. Is Rev2 going to be available when the H2 is back in stock?
Yes, we are preparing the production with Rev-B materials.
These users thanked the author odroid for the post:
tmihai20 (Fri May 24, 2019 3:32 am)

jasonhurd
Posts: 6
Joined: Wed Sep 07, 2016 11:52 am
languages_spoken: english
ODROIDs: C2 : H2
Has thanked: 1 time
Been thanked: 0
Contact:

Re: Re: Sold Out

Unread post by jasonhurd » Thu May 23, 2019 11:12 am

That is great to hear! Thank you for the information.

puremind
Posts: 45
Joined: Wed Nov 21, 2018 2:27 am
languages_spoken: english
ODROIDs: Odroid H2 Rev B
Has thanked: 2 times
Been thanked: 8 times
Contact:

Re: Sold Out

Unread post by puremind » Fri May 24, 2019 1:37 am

Any chance of 4 SATA ports and/or hardware RAID support?

That would make the H2 the NAS killer combo ...
Odroid H2 Rev B, 16GB Ripjaws, MP510 Corsair 512GB Nvme

RomaT
Posts: 213
Joined: Thu Oct 23, 2014 4:48 pm
languages_spoken: Russian
ODROIDs: -H2 rev.B, -XU3, -XU4, -C1, -C2, -W, -VU, CloudShell
Location: Perm, Russia
Has thanked: 4 times
Been thanked: 31 times
Contact:

Re: Sold Out

Unread post by RomaT » Fri May 24, 2019 1:42 am

puremind wrote:
Fri May 24, 2019 1:37 am
Any chance of 4 SATA ports and/or hardware RAID support?
IOCREST IO-M2F9230-4IR

puremind
Posts: 45
Joined: Wed Nov 21, 2018 2:27 am
languages_spoken: english
ODROIDs: Odroid H2 Rev B
Has thanked: 2 times
Been thanked: 8 times
Contact:

Re: Re: Sold Out

Unread post by puremind » Fri May 24, 2019 3:42 am

Are you using one with the H2?
Do you have any performance figures?
Thanks
Odroid H2 Rev B, 16GB Ripjaws, MP510 Corsair 512GB Nvme

puremind
Posts: 45
Joined: Wed Nov 21, 2018 2:27 am
languages_spoken: english
ODROIDs: Odroid H2 Rev B
Has thanked: 2 times
Been thanked: 8 times
Contact:

Re: Re: Sold Out

Unread post by puremind » Tue May 28, 2019 12:37 am

odroid wrote:
Thu May 23, 2019 10:36 am
jasonhurd wrote:
Thu May 23, 2019 10:35 am
odroid wrote:
Wed May 22, 2019 6:47 pm
H2 Rev-B information
Oh.. Is Rev2 going to be available when the H2 is back in stock?
Yes, we are preparing the production with Rev-B materials.
Hi. Do you have an estimate of when the preorder window will open for the H2 Rev B?
Odroid H2 Rev B, 16GB Ripjaws, MP510 Corsair 512GB Nvme

User avatar
rooted
Posts: 6588
Joined: Fri Dec 19, 2014 9:12 am
languages_spoken: english
Location: Gulf of Mexico, US
Has thanked: 88 times
Been thanked: 17 times
Contact:

Re: Sold Out

Unread post by rooted » Tue May 28, 2019 12:57 am

Please start a new topic about the revision; odroid was trying to keep this one about availability.

Done (revision discussion):

viewtopic.php?t=35165

User avatar
odroid
Site Admin
Posts: 31848
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English
ODROIDs: ODROID
Has thanked: 89 times
Been thanked: 253 times
Contact:

Re: Re: Sold Out

Unread post by odroid » Fri May 31, 2019 3:54 pm

The ODROID-H2 Rev B will have these minor changes.
<1> The CLK-REQ signal pull resistor value is increased to support the PCIe-to-SATA bridge board.
<2> The 12V SATA power circuit on the H2 board is improved for high-power-consumption HDDs like the HGST Ultrastar 7K4000 series and the Seagate IronWolf series.
<3> An M.2 screw is added for NVMe storage installation, since many users had a hard time finding a proper screw to fasten their NVMe SSD.
<4> The latest BIOS 1.05 comes pre-installed.

You can find the Rev B schematics and PCB pictures in this link.
https://wiki.odroid.com/odroid-h2/hardw ... schematics

The BIOS revision history is available here. You can also use the latest BIOS on the first-batch Rev 0.1 boards.
https://wiki.odroid.com/odroid-h2/hardw ... on_history

We ordered this ITX case to write a build guide for a DIY 6-bay NAS, as @tobetter mentioned. Delivery from China to Korea may take two or more weeks.
https://www.aliexpress.com/item/HCiPC-6 ... 19894.html
Once we receive it, we will write a Magazine article with several pictures.

We will start selling and shipping the ODROID-H2 Rev B on June 26th if everything goes well.
These users thanked the author odroid for the post:
elatllat (Mon Jun 17, 2019 12:42 pm)
