Update: since Kotlin/Native 0.5, the following article is now outdated. I’ll be keeping its content for reference.
Kotlin Native 0.4 has been out since the beginning of November and, with it, come some nice sample iOS demo apps to run on your favourite iOS device. That sounds amazing, even to a hardcore Swift fan like me. Yet, there's a catch: the sample apps can only run on a real iOS device, not in the Simulator. Sure, that's just an insignificant drawback compared to what the Kotlin Native team has achieved so far. Still, I'm a huge fan of the iOS Simulator, as it's by far the fastest way to do some quick testing of an app while doing a pretty good job of mimicking the behaviour of an app on a real device.
So, as a way to understand the Kotlin Native project, I decided to try supporting the Simulator in a Kotlin Native iOS app.
In order to do this, I started from the UIKit sample project found in the samples directory of the main repo and tried to make it run in the Simulator.
In this article we will discover how to use the Swift programming language to write software running on our Raspberry Pi. We'll be reading and writing through the GPIO of our board, connecting a number of widely available components and, at the same time, interacting with a remote server.
Here, I'll be covering how to get started with Swift on a Raspberry Pi board.
One of the WWDC 2015 announcements that interested us the most has definitely been the support for code coverage for the Swift language.
In this article we will look at the advantages of the new code coverage functionality introduced in Xcode 7 and how to integrate this metric into our daily work.
Code coverage is a metric that measures the value of our tests by identifying what code is executed when running them and, above all, what portions of our project are untested.
How does it work?
The production of code coverage information is done in two passes:
At compile time, the compiler prepares the files for analysis
At runtime, the lines of code affected by the tests are annotated in a specific file
Xcode code coverage before June 2015
Before WWDC 2015, only the Objective-C code coverage was supported by Apple’s tools, while Swift had been left behind. Also, the Objective-C support was sometimes inconsistent and required a few tricks to get the information.
How did it work?
The procedure necessary to retrieve the information was a variant of the one used by gcov, included in the gcc tools. Two settings had to be added to the Build Settings:
Generate test coverage files, which corresponds to the
-ftest-coverage gcc flag
Instrument program flow, corresponding to the
-fprofile-arcs gcc flag
The former allows the creation of the .gcno files, which contain the information needed to build the execution graph and reconstruct the line numbers.
The latter, Instrument program flow, deals with the creation of the .gcda files, which contain the number of transitions on the different arcs of the graph and other summary information.
In order to force the generation of these command line data, it was possible to use the following command:
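The original snippet isn't reproduced above; the usual way to force these settings from the command line was to pass them as build-setting overrides to xcodebuild. A sketch, where the scheme and destination names are placeholders for your own project's values:

```shell
# Sketch: enable the two gcov-style coverage settings for a test run;
# MyApp and the destination below are placeholders
xcodebuild test \
    -scheme MyApp \
    -destination 'platform=iOS Simulator,name=iPhone 6' \
    GCC_GENERATE_TEST_COVERAGE_FILES=YES \
    GCC_INSTRUMENT_PROGRAM_FLOW_ARCS=YES
```

After the run, the .gcno and .gcda files end up in the build's intermediates directory, ready for the tools listed below.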
Exploiting the data
A number of tools for reading and exporting reports exist, but in particular, we used the following:
CoverStory, a GUI tool to read .gcda and .gcno files
gcovr, which translates these files into the Cobertura XML format
lcov, which generates a visual report as navigable HTML
and, especially, Slather.
Slather, developed by the SF-based company Venmo, exports the code coverage data in a number of different formats, including Gutter JSON, Cobertura XML, HTML and plain text. In addition, it integrates easily with other platforms, such as Jenkins, Travis CI, Circle CI and coveralls.
As mentioned, one of Slather's major assets is its ease of configuration and integration within a continuous integration system. Slather is open source and available at Venmo's GitHub repo.
A recent tool for collecting Swift coverage information is Swiftcov, developed by the guys at Realm. Swiftcov makes heavy use of LLDB breakpoints to detect which lines are affected by the execution of our tests.
Code coverage after June 2015
During the WWDC 2015 keynote, Apple announced that Xcode 7 would introduce support of code coverage for our beloved Swift.
How does it work?
A completely new format has been introduced, named profdata, making gcov legacy – at least as far as projects developed with Apple's tools are concerned. In other words, starting from the very first beta of Xcode 7, profdata is intended to completely replace gcov for both Swift and Objective-C.
To enable the setting in Xcode 7, edit the scheme and, in the "Test" tab, tick the "Gather coverage data" checkbox.
As for the command line, xcodebuild now ships with a new parameter,
-enableCodeCoverage, which can be used as follows:
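A minimal sketch of the invocation (the scheme and destination names are placeholders):

```shell
# Sketch: run the tests with the new Xcode 7 coverage switch enabled;
# MyApp and the destination are placeholders for your own project
xcodebuild test \
    -scheme MyApp \
    -destination 'platform=iOS Simulator,name=iPhone 6' \
    -enableCodeCoverage YES
```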
Once the tests run, coverage information is immediately available in Xcode, on the right side of the code editor (see image below) and, in particular, in the “Report Navigator”.
The Report Navigator shows in detail which classes are covered by our tests and, by expanding the selection, which methods are actually used.
Exploiting the data
Apple's work hasn't consisted only in enhancing Xcode, but also in extending the features of the llvm-cov command line tool, which allows working with the .profdata format.
The llvm-cov show command, for instance, exports plain-text coverage information and outputs annotated source code files, which can be easily read and processed.
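As a sketch of the llvm-cov show invocation – all paths and the binary name below are placeholders that depend on your DerivedData layout and target:

```shell
# Sketch: print annotated, per-line coverage for a test run; the
# profdata path and app binary path are placeholders
xcrun llvm-cov show \
    -instr-profile /path/to/Coverage.profdata \
    /path/to/MyApp.app/MyApp
```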
A recent Pull Request allows Slather to work with profdata files and convert them to other formats, thus enabling the integration with the other platforms supported by the tool.
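With that Pull Request merged, the conversion could be sketched roughly as follows – the scheme, project path and exact option names here are assumptions based on Slather's options at the time:

```shell
# Sketch: export profdata-based coverage as Cobertura XML via Slather;
# MyApp and the .xcodeproj path are placeholders
slather coverage --input-format profdata --cobertura-xml \
    --scheme MyApp MyApp.xcodeproj
```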
If you are thinking about setting up an automated integration system, aside from the excellent Jenkins, Travis or Circle CI, it is perhaps time to start taking into consideration Xcode Server, which is part of the OS X Server bundle, distributed free of charge by Apple.
With the new version of Xcode bots and Xcode Server, it is now possible to support code coverage values and to display the results in a Web browser. The reports are also available in the “Big Screen” presentation, useful for presenting your content in a simplified yet effective overview.
In order to enable this workflow, you could follow the steps below:
Install OS X Server
Enable “Xcode” and “Websites” services
Create a new project and assign a Source Control Manager to it (such as git)
In Xcode, create an Xcode bot under “Product > Create Bot”
Select the frequency of integration and enable code coverage (see image below) next to the caption “Code Coverage”.
Launch an integration
Open a web browser at the host indicated by your OS X Server instance.
Code coverage is very useful for keeping the health of your code base under control. Although it cannot replace your confidence as a developer in well-designed, well-structured apps, this metric can help you write better code by encouraging you to set yourself concrete goals day by day.
Finally, the new tools offered by Apple now allow you to keep these values under control in minutes, with a simple and immediate configuration.
Even if I am not involved in the project, I believe Carthage has great potential. I really like the minimalist approach and, in particular, the fact that developers keep control over what really happens when they add an external dependency.
The idea of using committed xcodeprojs to retrieve information about the build is quite good, even if it obviously requires a shared scheme – which is not the case for the majority of projects so far.
I'm looking forward to seeing more and more libraries support Carthage, though for now, at least for client work, I'll stick with CocoaPods.
I somehow forgot to publish the slides on this blog. You can get them here.
It's been fun discussing the subject with people. I think there is still a long way to go before this is integrated into the majority of the applicable projects, mostly because the setup process needs to be automated. Still, I believe it's a step in the right direction.
For what concerns iOS my colleagues and I have been using Calabash-iOS for a year now, with mixed feelings.
Here is a totally subjective opinion about the pros and cons of Calabash-iOS.
Pros:
Conciseness of the Gherkin language
Capability of querying webviews with CSS selectors
Access to all the object property values via Ruby
Built-in Jenkins-ready output (XML and HTML test reports, mainly)
Performance (on iOS 6 and earlier)
It does not rely solely on accessibilityIdentifiers (as KIF does)
Cons:
It's not an Apple-backed project: functionality changes significantly from OS to OS
Tests fail randomly under certain circumstances, in particular when dealing with repeated scrolling on a UIScrollView (due to a poor implementation of the scroll/swipe functions)
On iOS 7, it relies on UIAutomation, thus…
…terrible performance on iOS 7 (see above)
Once again, quite a pain to make it work on iOS 7
Finding the right query for your element usually requires much trial and error via the calabash-ios console (Frank provides a nice UI tool for that task, but it's not merged into Calabash-iOS yet)
Truth be told, at this stage, the performance impact of relying on UIAutomation is a deal breaker for me, so on the next project we'll be using KIF, which also appears to be used in some Google projects.
After upgrading to Xcode 5, my Jenkins Continuous Integration machine over at Xebia stopped executing command-line GHUnit tests under some apparently random conditions. The console output was as follows:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Unable to start status bar server. Failed to check into com.apple.UIKit.statusbarserver: unknown error code'
The reason for the exception looks pretty much the same as the one I had to deal with some months ago (and solved in a previous post).
Well, it turns out that after the Xcode upgrade, quitting an iOS 5.x or iOS 6.x simulator instance does not remove the SpringBoard daemon (along with many others), thus preventing our test target from instantiating a status bar. Interestingly enough, this behaviour does not occur when quitting an iOS 7.0 simulator.
That said, in order to fix the issue, I added the following line to the RunTests.sh script:
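The original line isn't reproduced above; given the diagnosis, a plausible fix is a cleanup command that kills the stale simulator daemons before the tests launch – the daemon name below is an assumption:

```shell
# Hypothetical cleanup: kill any leftover SpringBoard daemon from a
# previous simulator session so the status bar server can start fresh;
# the || true keeps the script going when no such process exists
killall -9 SpringBoard 2>/dev/null || true
```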