NVIDIA UNVEILS THE NEW AI BRAIN FOR MACHINES AND A CHEAPER JETSON TX2

Nvidia is one of those companies that never ceases to amaze me. It has all but monopolized the GPU ecosystem, and now it is gunning for a new crown (if it hasn't claimed it already): AI on the Edge.

AI in the cloud has been the buzz for a while now, enabling numerous applications to gain artificial intelligence capabilities without much effort. Of course, this is a good idea, and multiple players are vying for it. I believe cloud computing is great, but it alone isn't enough. So many things come into play here: centralization, privacy (who really controls the data?), security, and, most crucially, physics (there is a limit to how much data we can push and pull). AI on the Edge brings a different dimension to this: you retain control of so many things and, most importantly, can build to your imagination.

With AI in the cloud, we put critical assets and infrastructure at risk of attack, and we might not even get the performance we hoped for, especially in the case of autonomous machines. Nvidia has been pushing edge infrastructure to enable AI on the Edge with its numerous arrays of AI-enabled hardware, but the unveiling of the Jetson AGX Xavier module could be a total game changer. It promises to give robots and other intelligent machines all the processing power their AI brains will ever need.

Nvidia announced the Xavier platform earlier this year and has now added "AGX" to the Jetson Xavier name. The AGX Xavier module is built around the Xavier system-on-chip, which relies on six processors to get its work done: a 512-core Nvidia Volta GPU with Tensor Cores, an eight-core Carmel Arm64 CPU, two NVDLA deep-learning accelerators, and dedicated image, vision, and video processors. Here comes the impressive part: the AGX Xavier module delivers up to 32 TOPS (trillion operations per second) of accelerated computing capability while consuming under 30 watts. That's more than 20x the performance and 10x the energy efficiency of the Jetson TX2. Users can configure operating modes at 10W, 15W, and 30W as needed.
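On Jetson boards these power presets are exposed through Nvidia's `nvpmodel` utility. Below is a minimal sketch of switching modes from the command line; note that the mode-ID-to-wattage mapping shown is an assumption, since the actual mapping is defined in `/etc/nvpmodel.conf` and varies by L4T release and board.

```shell
# Query the currently active power mode (prints its name and ID)
sudo nvpmodel -q

# Switch to a preset by ID. The IDs below are illustrative only;
# check /etc/nvpmodel.conf on your board for the real mapping.
sudo nvpmodel -m 1   # e.g. a 10W preset
sudo nvpmodel -m 0   # e.g. MAXN / 30W preset

# Optionally pin clocks to the maximum allowed by the active mode
sudo jetson_clocks
```

Lower-wattage modes cap CPU/GPU core counts and clock frequencies, so picking a preset is a direct trade between throughput and power draw.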


The module is available with a BSP built on Nvidia's Linux4Tegra stack, and, as previously announced, Nvidia also offers an AI-focused Isaac SDK. The AGX Xavier module is not designed for the everyday user, though: it costs $1,099 each in batches of 1,000 units. It can handle visual odometry, sensor fusion, localization and mapping, obstacle detection, and path-planning algorithms, making it ideal for demanding robots and other autonomous machines that need a lot of performance at relatively low power.


Joining the Jetson family is the newly introduced Jetson TX2 4GB, which is expected to be cheaper than the 8GB Jetson TX2. Like the earlier TX2 modules, the 4GB version features two high-end "Denver 2" cores and four Cortex-A57 cores. You also get the 256-core Pascal GPU with CUDA libraries for running AI and machine-learning algorithms. It is essentially the same Jetson TX2, just with half the memory.

Nvidia's Jetson TX2 4GB is expected to be available in June 2019, and pricing is rumored to be around $299 in 1,000-unit volumes. The Jetson AGX Xavier is available now starting at $1,099 per module for 1,000-plus purchases. More information may be found in Nvidia's announcement and its Jetson TX2 4GB product page.

About The Author

Ibrar Ayyub

I am an experienced technical writer holding a Master's degree in computer science from BZU Multan, Pakistan University. With a background spanning various industries, particularly in home automation and engineering, I have honed my skills in crafting clear and concise content. Proficient in leveraging infographics and diagrams, I strive to simplify complex concepts for readers. My strength lies in thorough research and presenting information in a structured and logical format.
