[MinnowBoard] Minnowboard and a standard Nvidia GPU?

Berth-Olof Bergman bo.bergman at winzenttech.com
Thu Jul 16 07:07:10 UTC 2015


The bottleneck in today's graphics is not the hardware, as the GPU is idling most of the time. You may argue that this is not the case, since you get better results with more lanes and higher bus speeds.

The reason multiple lanes and higher bus speeds give a performance boost is that transfers between the CPU and the GPU go faster, leaving more execution time for the bloatware on the computer to do its work.
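
To put rough numbers on the bus itself, here is a quick back-of-the-envelope in Python. It assumes the standard 8b/10b encoding of PCIe Gen1/Gen2 and ideal conditions; real-world throughput is lower, but the figures line up with the ones quoted later in this thread.

    # Theoretical PCIe throughput per lane and per link width.
    # Gen1/Gen2 use 8b/10b encoding: usable bytes/s = GT/s * 8/10 / 8.
    gen_rates_gt = {"PCIe 1.x": 2.5, "PCIe 2.0": 5.0}   # gigatransfers/s per lane

    for gen, gt in gen_rates_gt.items():
        per_lane_mb = gt * 1e9 * (8 / 10) / 8 / 1e6     # MB/s per lane
        for lanes in (1, 16):
            print(f"{gen} x{lanes:<2}: ~{per_lane_mb * lanes:6.0f} MB/s")

    # PCIe 2.0 x1  -> ~500 MB/s  (the single lane on the MinnowBoard MAX header)
    # PCIe 1.x x16 -> ~4000 MB/s (the "oldest and slowest" x16 link quoted below)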

However, the bottom line is that good software can feed the GPU more than 25 times faster, making up for the "bottleneck of the hardware". I am being ironic here.

Let's think about this for a while. Hardware takes giant evolutionary steps to get faster and faster: Ethernet has gone from 10 Mbit to 100 Mbit, from 100 Mbit to 1 Gbit, and from 1 Gbit to 10 Gbit. How about the software? It travels in the opposite direction.

Luckily for all these bloatware contributors, Intel has done real magic in making their CPUs unbelievably fast, so despite the total lack of software performance, everything works quite well.

However, writing software the right way can actually increase performance 20 to 100 times, depending on the task. If you can keep the GPU from idling, you will get a lot of performance out of the MinnowBoard MAX with its built-in GPU.
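
As a toy illustration of why the GPU ends up idling (this is plain in-memory copying in Python, not a real GPU benchmark, and the buffer sizes are arbitrary): moving the same data in many tiny pieces is clearly slower than moving it in a few large batches, because per-call overhead starts to dominate.

    # Toy demonstration (CPU-only, arbitrary sizes): per-call overhead dominates
    # when the same data is moved in many small pieces instead of large batches.
    import time
    import numpy as np

    total_bytes = 64 * 1024 * 1024                 # 64 MiB of "frame data"
    src = np.ones(total_bytes, dtype=np.uint8)
    dst = np.empty_like(src)

    def copy_in_chunks(chunk_size):
        start = time.perf_counter()
        for off in range(0, total_bytes, chunk_size):
            dst[off:off + chunk_size] = src[off:off + chunk_size]
        return time.perf_counter() - start

    for chunk in (4 * 1024, 1024 * 1024, total_bytes):   # 4 KiB, 1 MiB, one batch
        secs = copy_in_chunks(chunk)
        print(f"chunk {chunk:>9} B: {total_bytes / secs / 1e6:8.1f} MB/s")

The same principle applies to the GPU: batch your uploads and your draw calls, and the hardware will spend far less time waiting for you.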

It doesn't matter how fast the hardware made by Intel and other vendors is. It takes two to tango, so good software is the key. This will not happen in the open source community, as there are too many people involved; it always tends to be quantity instead of quality.

Quantity is not the answer to high performance. Quality is. Everything counts, so every single piece in the machinery needs to perform. You wouldn't build an engine for a race car with a mixture of low- and high-quality components, would you?

Best regards,

B-O 

  
> On 16 Jul 2015, at 04:01, Scott Guthrie <scottgu3 at gmail.com> wrote:
> 
> You know...this type of thing has been tickling my brain lately.  I wasn't thinking Minnowboard though...but a NUC (i7 or i5) would seem to have enough oomph to do this.  Sadly they seem to be limited to 12 PCIe lanes and 4x1 or 2x4 lane configurations.
> 
> And frankly, I can surmise this isn't really on the Intel roadmap (after all, a NUC is not really MEANT to be a gaming PC), but damn, one x16 slot, even if I had to go through a slot extender to a second case and power the GPU separately.
> 
> One can dream, I suppose.
> 
> Sorry....Tangent.
> 
> S.
> 
> 
> 
> On Wed, Jul 15, 2015 at 8:04 PM, John 'Warthog9' Hawley <warthog9 at eaglescrag.net> wrote:
> On 07/15/2015 08:42 AM, Ashot Arshakyan wrote:
> > Hello. I have a project where I need to render thousands of OpenGL
> > frames each second by using a x86 based PC and a typical high end NVidia
> > GPU.
> > The problem is I have a pretty limited space for the electronics of the
> > device I'm working on so I was hoping I could use a single board PC
> > instead of using a mini ATX.
> > Since the main rendering job is done on the GPU, I thought the
> > Minnowboard CPU would be fast enough for the job. What are your thoughts?
> > Now I just need to know how I can connect a standard GPU to Minnowboard.
> > I'm thinking of powering the GPU with a flat 12V 30A power supply
> > instead of ATX to again save some space.
> 
> So there's a couple of things you should be aware of while pondering
> this.  It *MIGHT* work, but you'll need to consider that modern graphics
> cards use a LOT of bandwidth for moving data in and out of the graphics
> card.  They usually employ 16 lanes of PCI-e, which at its oldest and
> slowest is about 4GByte/s.  So there are two potential issues here:
>         1) The MinnowBoard only has a single lane of PCI-e v2
>            available for use through the high speed expansion
>            header.  This gives you a maximum throughput of
>            about 500MByte/s
> 
>         2) *MANY* graphics cards won't negotiate their PCI-e
>            lanes down to the point where it will work in a x1
>            connector.  This is mostly an assumption from the
>            manufacturers that the card would never be used in
>            such a slot.  Case in point: when we tried to get an,
>            admittedly rather high-end, graphics card working on
>            the MAX on April 1st this year, it didn't work because
>            the card couldn't negotiate down to a single PCI-e
>            lane.
> 
> The only way to overcome 2 that I can think of is to put a PCI-e
> switch on the bus that provides 16 lanes downstream but only one lane
> upstream.  This still limits your maximum bandwidth to 500MByte/s,
> but it should get you past the lane negotiation issue.  That's a
> substantial amount of work though, and would likely need a custom
> lure to be built to accommodate it.  Just some thoughts.
> 
> - John 'Warthog9' Hawley
