QNAP Mustang-F100-A10-R10 Accelerator Card
$2,977.00 inc. GST
Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA
Out of stock
Description
As QNAP NAS evolves to support a wider range of applications (including surveillance, virtualization, and AI), you not only need more storage space on your NAS but also greater processing power to optimize targeted workloads. The QNAP Mustang-F100-A10-R10 is a PCIe-based accelerator card built on the programmable Intel® Arria® 10 FPGA, providing the performance and versatility of FPGA acceleration. It can be installed in a PC or a compatible QNAP NAS, making it well suited for AI deep learning inference workloads.
>> Difference between Mustang-F100 and Mustang-V100.
If you are focusing on video processing only, we highly recommend the Mustang-V100, which features low power consumption. If you need to process audio and other signals at the same time, the Mustang-F100 would fit your needs.
- Half-height, half-length, double-slot.
- Power-efficient, low-latency.
- Supports the OpenVINO™ toolkit; ready for AI edge computing.
- FPGAs can be optimized for different deep learning tasks.
- Intel® FPGAs support multiple floating-point precisions and inference workloads.
OpenVINO™ toolkit
The OpenVINO™ toolkit is based on convolutional neural networks (CNNs); it extends workloads across Intel® hardware and maximizes performance.
It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate representation (IR), then execute the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
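The IR produced by the Model Optimizer is a pair of files: an .xml file describing the network topology and a .bin file holding the weights. As a rough illustration of that layout, the sketch below parses a hand-written miniature topology file with a plain XML parser. The toy XML is an assumption for illustration, not real Model Optimizer output, which contains far more attributes and layers.

```python
# Sketch: inspecting a miniature OpenVINO-style IR topology file.
# TOY_IR_XML is a hand-written toy example, NOT real Model Optimizer
# output; it only mimics the <net>/<layers>/<layer> nesting of an IR.
import xml.etree.ElementTree as ET

TOY_IR_XML = """
<net name="toy_net" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

def list_layers(ir_xml: str):
    """Return (name, type) pairs for each layer in an IR-style topology."""
    root = ET.fromstring(ir_xml)
    return [(layer.get("name"), layer.get("type")) for layer in root.iter("layer")]

print(list_layers(TOY_IR_XML))
```

The weights themselves never appear in the .xml; the inference engine loads them from the companion .bin file at deployment time.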
Get deep learning acceleration on Intel-based Server/PC
You can install the QNAP Mustang-F100-A10-R10 in a PC/workstation running Linux® (Ubuntu®) to gain computational acceleration for applications such as deep learning inference, video streaming, and data center workloads. As an ideal acceleration solution for real-time AI inference, the Mustang-F100 also works with the Intel® OpenVINO™ toolkit to optimize inference workloads for image classification and computer vision.
- Operating systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 (more operating systems coming soon)
- OpenVINO™ toolkit
  - Intel® Deep Learning Deployment Toolkit
    - Model Optimizer
    - Inference Engine
  - Optimized computer vision libraries
  - Intel® Media SDK
  - OpenCL™ graphics drivers and runtimes*
- Current supported topologies: AlexNet, GoogLeNet, Tiny YOLO, LeNet, SqueezeNet, VGG16, ResNet (more variants coming soon)
- Intel® FPGA Deep Learning Acceleration Suite
- High flexibility: the Mustang-F100-A10 is built on the OpenVINO™ toolkit structure, which allows models trained in Caffe, TensorFlow, or MXNet to execute on it after conversion to an optimized IR.
*OpenCL™ is a trademark of Apple Inc. used by permission by Khronos.
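When the inference engine targets the FPGA, it is conventionally addressed through a heterogeneous device priority string such as `HETERO:FPGA,CPU`: layers run on the FPGA where supported and fall back to the CPU otherwise. The sketch below shows the idea of resolving such a priority string against the devices actually present. `pick_device` is a hypothetical helper written for this illustration; it is not part of the OpenVINO™ API, which performs this fallback internally per layer.

```python
# Conceptual sketch of a HETERO-style device priority string.
# pick_device() is a hypothetical helper for illustration only; the
# real OpenVINO inference engine handles per-layer fallback itself.

def pick_device(priority: str, available: set) -> str:
    """Return the first device in a 'HETERO:FPGA,CPU' style priority
    string that is present in the set of available devices."""
    devices = priority.split(":", 1)[-1].split(",")
    for dev in devices:
        if dev in available:
            return dev
    raise RuntimeError(f"none of {devices} is available")

# With no FPGA installed, execution falls back to the CPU.
print(pick_device("HETERO:FPGA,CPU", {"CPU", "GPU"}))
```

This is why the same IR can be deployed unchanged on a machine with or without the accelerator card: only the device priority string changes.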
Additional information
Dimensions: 198 × 288 × 84 mm