Mjølner Architect Attends Worldwide Developers Conference 2017
Two weeks ago, I attended the Apple conference WWDC in San Jose, California. WWDC stands for ‘Worldwide Developers Conference’ and as the name implies, it mainly targets developers on Apple’s platforms. However, the conference is also well-known (and hyped) for the keynote, where Apple has made it a tradition to launch new hardware as well as a new version of all of its operating systems for Mac, Apple TV, Apple Watch, and, of course, iPhone.
This year was no exception: there were launches of a new iPad, iMacs, and MacBooks, as well as a new version of all the operating systems. But there was also a brand new hardware product, the HomePod – a Siri-enabled speaker that can sense the space around it and use this information to optimize music playback.
As an iOS developer, I was most interested in the changes to the developer tools and the iOS platform. Here, the session called ‘State of the Union’, which always follows directly after the keynote, presented huge news for developers: Xcode has received a complete overhaul with speed and stability improvements, wireless debugging, and support for multiple simultaneous simulators.
Best of all, it also gains many new refactoring capabilities – one of the best things that could happen to this IDE (Integrated Development Environment), in my opinion.
Regarding iOS, there were several interesting updates as well. I would especially like to highlight the built-in augmented reality engine (ARKit), the Vision framework, the Core ML machine learning framework, and finally drag-and-drop support on iPad (the first beta version also has drag-and-drop support in some apps on the iPhone, but I suspect that this is unintended, and it will most likely be disabled before launch).
I am especially looking forward to the new possibilities that the augmented reality engine gives developers, and I expect that some really cool games will be released in the coming year. But augmented reality is also very interesting for production companies, who will now be able to view and walk around their 3D models in a real environment.
If you want to read more about what was introduced at WWDC, raywenderlich.com has a really nice overview here.
My impressions from WWDC
In general, being at a big conference like WWDC was a great experience. All sessions can be streamed from any computer, but there are several aspects of the conference that cannot be experienced from the couch at home.
For example, the excited atmosphere – especially at the keynote and the ‘State of the Union’ presentations. It was also interesting to see how Apple handles 5000 excited developers queuing for the keynote. Furthermore, it was inspiring to hear Michelle Obama’s take on how everybody has an obligation to improve the world, starting in our own neighbourhoods.
But the biggest difference between being at the conference and staying home is the possibility to talk to Apple’s own engineers. WWDC has a large setup of 10-15 different labs with rotating topics. At any given time during the day, there were probably close to 200 Apple engineers in the labs, ready to help with any development problem the conference attendees had run into.
With almost 100 different lab topics, it took some time to identify which labs matched the exact problems we had struggled with. I had therefore spent many hours before the conference identifying the problems that we at Mjølner had struggled with previously.
One of the things we have spent a lot of time on in the past months is splitting our project into frameworks, and this has given us several problems. For example, when using the UIImageView class in Interface Builder, there is no way to specify which framework an image should be loaded from. Since there was no frameworks lab, I chose to go to the UIKit lab instead (the UIImageView class is part of the UIKit framework).
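In code, this is not a problem, since UIImage can be given an explicit bundle; the limitation is only in Interface Builder. A minimal sketch of the programmatic approach – the class and image names here are hypothetical, used only to illustrate the pattern:

```swift
import UIKit

/// Hypothetical class that lives in the separate framework target,
/// used only to locate that framework's bundle at runtime.
final class FrameworkAssets {}

func frameworkImage(named name: String) -> UIImage? {
    // Bundle(for:) resolves to the bundle containing the given class,
    // i.e. the framework bundle rather than the app's main bundle.
    let bundle = Bundle(for: FrameworkAssets.self)
    return UIImage(named: name, in: bundle, compatibleWith: nil)
}
```

Interface Builder offers no equivalent of the `in:` bundle parameter for a plain UIImageView, which is exactly the gap we took to the lab.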
When you arrive at a lab, there are 3-5 Apple engineers standing at the door. Their sole purpose seems to be to direct people to the right expert in the area. They first guided me to the engineer who had written UIImageView (yes, a better expert would be hard to find). He quickly identified that only the people behind Interface Builder could fix this problem, and he therefore guided me on to one of the engineers who had written Interface Builder.
The Interface Builder engineer agreed that this was a real use case and encouraged me to file a bug report. Personally, I was a bit afraid that it would lose attention once it entered Apple’s huge bug reporting system, so I pushed a little to see if we could find a solution that worked for us.
He then guided me to an engineer who works on ‘IBDesignable’ – a mechanism that lets classes render their content directly in Interface Builder. Together with him, I wrote an IBDesignable class that can load an image from a separate framework and show it both in the app and in Interface Builder.
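The class we ended up with looked roughly like the sketch below – the names are illustrative, not the exact code from the lab. The key trick is that `Bundle(for:)` also resolves correctly while Interface Builder renders the designable, whereas `Bundle.main` would point at the IB rendering agent instead of the app:

```swift
import UIKit

/// An image view that loads its image from the framework bundle that
/// contains this class, so the image renders both at runtime and
/// inside Interface Builder.
@IBDesignable
final class FrameworkImageView: UIImageView {

    /// The image name, settable from the Attributes inspector in IB.
    @IBInspectable var frameworkImageName: String = "" {
        didSet { reloadImage() }
    }

    private func reloadImage() {
        // Resolve the framework bundle via the class, not Bundle.main.
        let bundle = Bundle(for: FrameworkImageView.self)
        image = UIImage(named: frameworkImageName,
                        in: bundle,
                        compatibleWith: nil)
    }

    // Called by Interface Builder when rendering the designable,
    // ensuring the image also shows up on the canvas.
    override func prepareForInterfaceBuilder() {
        super.prepareForInterfaceBuilder()
        reloadImage()
    }
}
```

Setting `frameworkImageName` in the Attributes inspector then shows the framework’s image directly on the storyboard canvas.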
All in all, I went to the labs with four different problems – problems that I had struggled with for hours without finding feasible solutions. Some were resolved immediately. Some required me to file bugs with Apple to get them solved. But in general, Apple’s engineers were very helpful in finding solutions like the one described above. The labs alone make the long trip worth every penny and every hour of lost sleep 🙂