Apple’s announcement introducing ARKit for iOS earlier this week marks a major development in the software industry. It is the tech giant’s first major step toward actively competing in the augmented reality space.
By opening these tools up to the iOS development community at large, Apple has enabled a whole new dimension of product development for one of the world’s most widely used operating systems. The company may not be far off in its claim that ARKit is ‘the largest AR platform in the world.’
Needless to say, the team at CQL is very excited about the potential impact of leveraging these tools to build solutions for our customers.
ARKit in a Nutshell
ARKit is a new platform that helps developers bring high-quality augmented reality (AR) experiences to iPhone and iPad, using the built-in camera, powerful processors, and motion sensors in iOS devices. It allows developers to tap into the latest computer vision technologies to build detailed and compelling virtual content on top of real-world scenes for interactive gaming, immersive shopping experiences, industrial design, and more.
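To give a sense of how little overhead is involved, here is a minimal sketch of an ARKit view controller that starts world tracking and anchors a virtual cube half a meter in front of the camera. This is an illustrative example, not code from the announcement; it assumes a device running iOS 11 with an A9 or later processor.

```swift
import UIKit
import ARKit

class ARDemoViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        // Place a simple 10 cm virtual cube 0.5 m in front of the camera.
        let cube = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                            length: 0.1, chamferRadius: 0))
        cube.position = SCNVector3(0, 0, -0.5)
        sceneView.scene.rootNode.addChildNode(cube)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking fuses camera frames with motion-sensor data so
        // virtual content stays anchored to the real-world scene.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```

Notice that the developer never touches the computer vision pipeline directly; the session configuration is the whole setup.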
Tom Stoepker, our Development Director at CQL, commented, ‘A major takeaway from this announcement is that Apple, one of the most significant influencers in digital product development, is not only affirming that the current trend of spatially aware and immersive software is the future, but has doubled down by providing third-party developers the tools to develop rich AR apps with significantly less overhead in terms of time and complexity.’
Tom also noted that this move by Apple could be compared to the era when Google and Apple provided Map and Geolocation APIs/SDKs for developers to leverage. ‘The impact of releasing those tools to the software development industry at large was a prerequisite for companies like Uber, Airbnb, Waze, and Foursquare to even exist.’
Releasing tooling like ARKit could very well be the beginning of another wave of innovation that could bring about a major shift in digital products, similar to how Maps and Geolocation did years ago.
Also Released: Core ML (Core Machine Learning)
For those of us deep into code, we also want to highlight a less publicized but equally relevant SDK announcement: Apple released Core ML (Core Machine Learning).
Core ML enables machine learning models, such as image recognition and natural language processing, to run on devices like smartphones. Apple is promoting the new SDK’s capability to perform on-device analysis, which opens up possibilities for smarter software even when a phone is not connected to the internet.
Coupled with Apple’s APIs, Core ML provides access to already-tested facial recognition analysis and text processing, so building highly intelligent software no longer requires an Apple-sized budget for upfront data science and research.
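As one concrete example of that on-device analysis, the sketch below uses the Vision framework’s built-in face detection request, which was announced alongside Core ML. The function name and callback shape are our own illustration; the `VNDetectFaceRectanglesRequest` and `VNImageRequestHandler` types are part of Apple’s Vision API.

```swift
import UIKit
import Vision

// Count the faces in a UIImage using Vision's built-in face detector.
// The analysis runs entirely on-device; no network connection is needed.
func detectFaces(in image: UIImage, completion: @escaping (Int) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(0)
        return
    }

    let request = VNDetectFaceRectanglesRequest { request, error in
        // Each detected face is reported as a VNFaceObservation with a
        // normalized bounding box.
        let faces = request.results as? [VNFaceObservation] ?? []
        completion(faces.count)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // Vision work can be heavy, so keep it off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

The same request-and-handler pattern extends to running custom Core ML models through Vision, which is where the upfront research savings really show up.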
Stay tuned for more information and tutorials on how to use these new developments.