NVIDIA recently released a new entry-level, low-power AI module for inferencing on the edge: the Jetson TX2 NX. It's positioned right above the Nano and below the Xavier NX, which seems like the right call. The Nano was a bit too entry-level and the Xavier NX was overkill for small projects; this one looks just right.
Another great thing about the TX2 NX is that it uses the same 260-pin SO-DIMM connector as the Nano and the Xavier NX, which makes it easy to swap modules and upgrade within the family.
It has twice the CUDA cores of the Nano (256 vs. 128) and bumps the GPU architecture from Maxwell to Pascal.
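If you want to sanity-check those GPU specs once the board is up, a quick query through the CUDA runtime API does the trick. This is just a minimal sketch, assuming JetPack's CUDA toolkit is installed on the module; nothing here is TX2 NX-specific.

```c
// query_gpu.cu - minimal sketch: query the on-board GPU via the CUDA runtime API.
// Assumes the JetPack CUDA toolkit is installed; compile with: nvcc query_gpu.cu -o query_gpu
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Compute capability 6.x corresponds to the Pascal architecture.
    printf("Device:              %s\n", prop.name);
    printf("Compute capability:  %d.%d\n", prop.major, prop.minor);
    printf("Multiprocessors:     %d\n", prop.multiProcessorCount);
    printf("Total global memory: %.1f GiB\n",
           prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```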
According to NVIDIA it is up to 2.5x faster than the Nano, and price-wise it's about $200.
Excited to play with it.