With the release of Core ML at WWDC 2017, Apple gave iOS, macOS, watchOS, and tvOS developers an easy way to integrate machine learning models into their apps, bringing intelligent new features to users with just a few lines of code. Core ML makes machine learning more accessible to mobile developers, and it enables rapid prototyping and the use of on-device sensors (such as the camera and GPS) to create more powerful apps than ever.
Members of the MXNet community, including contributors from Apple and Amazon Web Services (AWS), have collaborated to produce a tool that converts machine learning models built with MXNet to the Core ML format. This tool makes it easy for developers to build machine-learning-powered apps for Apple devices, giving you a fast pipeline for deep-learning-enabled applications: you can move from scalable, efficient distributed model training in the AWS Cloud using MXNet to fast runtime inference on Apple devices.
One of the savvier comments made following the introduction of Core ML was that Apple had just invented the PDF of AI. There is more nuance to it than that, but with moves like this new converter, it's becoming clear that Core ML is making machine learning more accessible and interoperable.
It's good news for Apple but great news for everyone working on ML, especially for iOS apps.