Tags: ios, xcode7, ios-ui-automation, xcode-instruments

iOS UI Automation: Monitor log for signals


I am working on setting up an Xcode/Instruments UI Automation project for the purpose of capturing screenshots of my app at various states automatically.

One of the major issues with this approach is timing. Because the app communicates with a server, the time that it takes for certain events to occur varies (sometimes quite a lot) from run to run. Using delays is far from ideal, as it inflates the time to perform the screenshot capture (and we have to run this about 280 times so it's going to add up) and still doesn't guarantee that the app is in the correct state (for example, we can't guarantee the server will return in 5 seconds, but most of the time it should be < 1 sec).
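
For illustration, the fixed-delay approach described above might look something like this in a UIAutomation script (the 5-second figure is only an example):

    // Naive fixed delay: always waits the worst-case time, and still
    // doesn't guarantee the app has reached the desired state.
    var target = UIATarget.localTarget();
    target.delay(5);
    target.captureScreenWithName("screen1");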

So my thinking is that an ideal solution would be to insert benign log statements into the actual app itself that could be monitored by the UI Automation script. For example, anytime the script detects "!!SCREENSHOT!!" in the log, it could snap another screenshot. This way we can use programmatic constructs to make sure the app is in exactly the right state for a screenshot, and cut down on the overall execution time of the script by avoiding delays.

My question is, first of all: is this possible? If so, how? If not, what other approaches have people come up with to solve this problem?


Solution

  • Well, I couldn't find anything about monitoring the log from an automation script, so I tackled this another way that so far seems to be working quite well.

    The basic idea was to use a UI element as the signal instead of the log. I created a simple method in my application delegate that creates an empty label off screen and assigns it a known accessibilityIdentifier (the dispatch_after is not strictly necessary, but I wanted to make sure control had returned to the main event loop before capturing).

    - (void)signalScreenshot:(int)number {
        // Defer briefly so control returns to the main run loop (and any
        // pending UI updates land) before the signal element is added.
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            // Only signal during automated runs, identified by a launch argument.
            if ( ![[[NSProcessInfo processInfo] arguments] containsObject:@"automatedexecution"] ) {
                NSLog(@"Screenshot signaled, but ignoring because we are not in automated execution.");
                return;
            }
            // Off-screen label whose accessibilityIdentifier encodes the
            // screenshot number; the automation script looks it up by name.
            UILabel* elem = [[UILabel alloc] initWithFrame:CGRectMake(-100, -100, 50, 50)];
            elem.accessibilityIdentifier = [NSString stringWithFormat:@"screenshotSignal_%d", number];
            NSLog(@"Signaling for screenshot %d...", number);
            [self.window insertSubview:elem atIndex:0];
        });
    }
    

    The existence of this invisible element in the view hierarchy acts as a signal to the automation script to snap a screenshot. Inside the automation script, I've written a captureScreen function that uses a self-incrementing counter to identify the screen number. There is a unique signal element for each incremental screen number.

    // UIAutomation's handle to the app under test.
    var target = UIATarget.localTarget();
    var screenshotCounter = 0;
    
    function captureScreen() {
        screenshotCounter++;
        // Look up the off-screen signal element the app adds for this screenshot number.
        var screenshotSignal = target.frontMostApp().mainWindow().elements().firstWithName("screenshotSignal_" + screenshotCounter);
        // Acts as a wait: UIAutomation keeps re-resolving the element for up
        // to the current timeout until it matches.
        screenshotSignal.withValueForKey(1, "isVisible");
    
        if ( screenshotSignal.isValid() ) {
            UIALogger.logDebug( "Screenshot " + screenshotCounter + " signaled." );
            target.captureScreenWithName( "" + screenshotCounter );
            return true;
        } else {
            // Dump the element tree to help diagnose a missing signal.
            target.logElementTree();
            throw "Did not detect screenshot signal " + screenshotCounter;
        }
    }
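
    Since that wait is governed by UIAutomation's timeout setting, the bound can be adjusted for just this call with the standard pushTimeout/popTimeout pair; a minimal sketch (the 10-second value is an assumption, not from the original script):

    // Temporarily raise the implicit element-resolution timeout, then
    // restore the previous value once the screenshot has been captured.
    target.pushTimeout(10);
    captureScreen();
    target.popTimeout();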
    

    Now, inside my script, I can automate the app to get it to the correct place and call captureScreen(). It's then up to the app itself to signal, via the signalScreenshot: method above, when it's ready for the script to capture the screen. Works great, and no artificial delays!
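
    To make the end-to-end flow concrete, a script run might look roughly like the sketch below; the element names (the "Log In" button, the table cell) are hypothetical placeholders for whatever your app actually exposes:

    var target = UIATarget.localTarget();
    var app = target.frontMostApp();
    
    // Drive the app to the first state of interest.
    app.mainWindow().buttons()["Log In"].tap();
    // The app calls signalScreenshot: once its server round-trip finishes;
    // captureScreen() waits for that signal before snapping the image.
    captureScreen();
    
    // Continue to the next state and capture again.
    app.mainWindow().tableViews()[0].cells()[0].tap();
    captureScreen();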