TensorFlow Lite Core ML delegate enables faster inference on iPhones and iPads

02, 2020 — Posted by Tei Jeong and Karim Nosseir, Software Engineers

TensorFlow Lite offers options to delegate part of the model inference, or the entire model inference, to accelerators such as the GPU, DSP, and/or NPU for efficient mobile inference. On Android, you can choose from several delegates: NNAPI, GPU, and the recently added Hexagon delegate. Previously, with Apple's mobile devices — iPhones and iPads — the only option was the GPU delegate. When Apple released its machine learning framework Core ML and the Neural Engine (a neural processing unit (NPU) in Apple's Bionic SoC), this allowed Tens…
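As a sketch of what delegation looks like in practice, the snippet below shows one way to enable the Core ML delegate from the TensorFlow Lite Swift API, falling back to the GPU (Metal) delegate on devices where Core ML acceleration is unavailable. The model path is a placeholder for illustration.

```swift
import TensorFlowLite

// Try the Core ML delegate first; its initializer returns nil on devices
// where Core ML acceleration is not supported.
var delegate: Delegate? = CoreMLDelegate()
if delegate == nil {
    // Fall back to the GPU (Metal) delegate.
    delegate = MetalDelegate()
}

// "model.tflite" is a placeholder path for illustration.
let interpreter = try Interpreter(modelPath: "model.tflite",
                                  delegates: [delegate!])
```

With the delegate attached, supported portions of the graph run on the Neural Engine (or GPU), while unsupported ops stay on the CPU.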

blog.tensorflow.org