Scheduling Events Not Blocked by User Interface on iOS

Just decided to make a quick post on this, because it has been causing me issues for a few hours now.

In iOS, we generally schedule a repeating timer as follows:

[NSTimer scheduledTimerWithTimeInterval: 0.5 target: self selector: @selector(update) userInfo: nil repeats: YES];

The problem with this is that UI events (for example, scrolling a UIScrollView, or other animations) prevent the timer from firing. This is because scheduledTimerWithTimeInterval: adds the timer to the current run loop in NSDefaultRunLoopMode only, and while the user is interacting with the interface the run loop is not running in that mode.

The following code will schedule the timer in all common modes, which include NSDefaultRunLoopMode and UITrackingRunLoopMode (which becomes active during touch-tracking events).

NSTimer* timer = [NSTimer timerWithTimeInterval: 0.5 target: self selector: @selector(update) userInfo: nil repeats: YES];
[[NSRunLoop mainRunLoop] addTimer: timer forMode: NSRunLoopCommonModes];

This is useful, for example, for keeping an OpenGL view rendering while the user is interacting with UIKit elements.
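As a minimal sketch of what the update selector might do (glView and its render method are hypothetical stand-ins for your own OpenGL view class):

- (void) update
{
    // Redraw the OpenGL view each time the timer fires,
    // even while a UIScrollView is tracking touches.
    [self.glView render];   // hypothetical view and method
}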

Plotting Realtime Data Sets on iOS

While Quartz provides excellent drawing capabilities for making graphs and charts, it's generally not fast enough to plot rapidly changing, real-time data sets. In this situation, OpenGL can be used to draw the line segments instead.
To the UI-Elements library, I've added a simple class GLPlot that handles plotting of a constantly changing floating point data set using OpenGL. All you need to do is provide a pointer to the data buffer and then display the provided view in your application.


The following methods are provided by GLPlot:
- (id) init;
- (UIView*) view;
- (void) setDataBuffer:(float*)buffer;
- (void) setViewport:(CGRect)window;
- (void) plotFrom:(int)start to:(int)stop;
If, for example, your program has a data buffer containing 1000 points, you would likely use the following code in a view controller to initialize the GLPlot:
#import <UI-Elements/GLPlot.h>
...
@property (nonatomic, strong) GLPlot* plot;
@property (nonatomic) float* buf;
...
int numPoints = 1000;
self.buf = malloc(numPoints * 2 * sizeof(float));
self.plot = [[GLPlot alloc] init];
[self.plot setDataBuffer: self.buf];
The buffer should contain a series of vertices X1, Y1, X2, Y2, etc... Hence, a buffer containing 50 data points is described by 100 floating-point numbers.
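For instance, here is a quick sketch that fills the buffer allocated above with a sine wave (the amplitude and spacing are arbitrary choices for illustration):

// Interleaved (x, y) vertices: x counts up, y is a sine wave (sinf comes from <math.h>).
for (int i = 0; i < numPoints; i++) {
    self.buf[2 * i]     = (float)i;                  // X coordinate
    self.buf[2 * i + 1] = 100.0f * sinf(0.1f * i);   // Y coordinate
}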
The visible data window (a.k.a. viewport) can be set by providing a CGRect of any size whose origin is the lower-left point of the plot. That is to say, the following call...
[self.plot setViewport: CGRectMake(-300.0f, -100.0f, 600.0f, 200.0f)];
... will plot the data on an X axis from -300.0 to 300.0 and a Y axis from -100.0 to 100.0.
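Continuing the 1000-point sine-wave sketch from above, a viewport covering that data would be:

[self.plot setViewport: CGRectMake(0.0f, -100.0f, 1000.0f, 200.0f)];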
You must specify the range of vertices in the data buffer to be plotted. If possible, it is best to set the range to cover only data points which are known to lie inside the viewport. Note: the range is in terms of vertices, not buffer indices. So the following call...
[self.plot plotFrom: 100 to: 330];
... would indicate that the 230 data points stored from dataBuffer[200] through dataBuffer[659] should be plotted. If you instruct the GLPlot to plot data from memory which has not been properly allocated, a segmentation fault is very likely to occur.
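For the 1000-point example, plotting the whole buffer would then be (assuming, as in the call above, that the end index is exclusive, so this covers self.buf[0] through self.buf[1999]):

[self.plot plotFrom: 0 to: numPoints];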
Finally, to display the plot in your application, you just need to insert the view provided by the GLPlot:
[self.view addSubview: [self.plot view]];
The view has a transparent background, so you can place it on top of whatever background you like (for example, gridlines). It is configured to update at 30 fps to reflect the state of the provided data buffer. The foreground color of the graph line is settable through ColorScheme, similarly to other classes in UI-Elements.

UI Elements for iOS

For work, I'm sometimes required to put together iPad apps as a user-friendly means of controlling and receiving data from our wireless telemetry devices.

Over time I've had to create a lot of useful custom UI elements, so I decided to start doing most of my UI work off the clock. That way I can use the elements in my own side projects and publish them here!

A while ago, Bret Victor wrote an extremely insightful (as is all of his published work) article titled "Magic Ink." In it he argues that often the clearest way to present settings to the user is a simple sentence.

A typical design would use a preference dialog or form that the user would manipulate to tell the software what to do. However, an information design approach starts with the converse—the software must explain to the user what it will do. It must graphically express the current configuration. 

For presenting abstract, non-comparative information such as this, an excellent graphical element is simply a concise sentence. 


The user always sees the software presenting information, instead of herself instructing the software. If the information presented is wrong, the user corrects it in place. There is no “OK” or confirmation button—the sentence always represents the current configuration.

Understandably, he never made an implementation publicly available. I went ahead and rolled my own. It's published here. Colors can be customized. More types of options can be added.

[Image: An "option string" showing a number of settings which can be changed directly by the user.]


[Image: The user changing the sentence by picking from a set of applicable words.]

[Image: The user updating an arbitrary number field (keypad included).]

Each modifiable part of the sentence is a subclass of OptionString and is required to provide an object that allows modification whenever it is tapped. More subclasses can be made to provide dates, filenames, etc.
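As a rough sketch of the idea (the class name, method name, and header path below are hypothetical illustrations, not the actual UI-Elements API), a date option might provide a picker when its word is tapped:

#import <UIKit/UIKit.h>
#import <UI-Elements/OptionString.h>   // hypothetical header name

// Hypothetical subclass sketch -- names are illustrative only.
@interface DateOption : OptionString
@end

@implementation DateOption

// Assumed override point: return the object presented when the word is tapped.
- (id) modifierObjectWhenTapped
{
    UIDatePicker* picker = [[UIDatePicker alloc] init];
    picker.datePickerMode = UIDatePickerModeDate;
    return picker;
}

@end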

The other control I'm publishing today is simply a clone of Android's in-app "toast" notification.

Removing project-specific dependencies is work, so there are only two controls available right now. More to come. Everything will be extracted into a single static library. Header files included. If you need help linking to a static library, call upon the infinite wisdom of Stack Overflow.


New Topics

I want to transition to writing about some new interests: audio/music algorithms, embedded systems, iOS development, and my work with the Center for Implantable Devices at Purdue.

I plan on publishing some small APIs for iOS, mostly providing user interface controls.

Projects put on hold:
  • Touch screen desk for education.
  • Oceania, although Ryan seems to still be working on it.

Projects abandoned:
  • Mobile games. I'm just not good at games.
  • All Android development, although Jason still does some.
  • Erlang web framework.

Also the name might change.

Plans for the Desk

EDIT AGAIN: Scratch the whole array idea entirely. I just found a method at NUI Group that uses photoresistors (i.e., fast scanning) but only requires them along the edge. Even though they are on the edge, there are none of the occlusion problems one might expect, because of some magic with polarizing film. Will post updates soon.

EDIT: Well, it seems that the rise time on photoresistors is generally about 60 ms, which is a hit on the response time of the screen. Basically, what that means is that no matter how fast we sample the matrix, the true response is limited to about 16 fps. So the goal is either to find a photoresistor which is cheap and has a rise time of ~10 ms -> 100 fps, or else to think of a new sensing system.

First of all, we (or maybe just I) plan on finishing the first desk by the end of Christmas break, but I have some ideas for the next version of the desk that I'll talk about now. The major improvement I want to make is to the size: the first version of the desk is about a foot and a half deep because of the distance the camera must sit from the screen. So the first step in making a thin desk is eliminating the camera from the design.

Obviously, we need some sort of visual sensor to replace the camera, so the option we will try is to make a large array of analog light sensors (a photocell network) underneath the screen. A microcontroller will then read the value of each sensor sequentially and use an algorithm to deduce the location of blobs. The circuit underneath the screen will look something like this: to select a column, the MCU puts a voltage on one of the top pins and a high impedance on the others; then, to read a specific sensor, the MCU feeds one of the row outputs through an analog-to-digital converter (using a multiplexer in between).
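A rough sketch of that scan loop, in C-style pseudocode (the pin-selection and ADC helpers are hypothetical placeholders for whatever the MCU's actual GPIO/ADC API provides):

#define NUM_COLS 32
#define NUM_ROWS 24

// Hypothetical hardware helpers:
void selectColumn(int col);        // drive this column; high impedance on the rest
void selectMuxChannel(int row);    // route this row output to the ADC input
unsigned int readADC(void);        // perform one analog-to-digital conversion

static unsigned int samples[NUM_ROWS][NUM_COLS];

void scanMatrix(void)
{
    for (int col = 0; col < NUM_COLS; col++) {
        selectColumn(col);
        for (int row = 0; row < NUM_ROWS; row++) {
            selectMuxChannel(row);
            samples[row][col] = readADC();   // sample the photocell at (row, col)
        }
    }
    // ...blob detection runs over samples[][] after each full scan...
}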


So far it seems that diodes are required on every photocell in order to prevent current from flowing back through the other resistors. I simulated the circuit and the back-flow seems somewhat negligible, especially in larger matrices, but diodes are cheap and precision is good, so I will keep them.

The other concern is the cost and difficulty of construction. For the first version of the thin desk I want to use a 32x24 matrix of photocells, so there are 768 cells that need to be purchased and soldered. So far the cheapest I have been able to find is $0.32 per cell for large-volume orders. I would be very glad to find one closer to $0.20, which would bring the cost of all the photocells down to about $150. The diodes and resistors needed don't total more than $10-$15, so they aren't a problem. I was thinking I might be able to use my lab's rapid prototyping machine to fabricate the PCB and solder all of the components onto it.