One of the key hurdles that developers face when transitioning from Silverlight to Silverlight for Windows Phone is learning how to build touch interfaces. While the desktop versions of Silverlight do support low-level touch events, the vast majority of desktop applications eschew touch input and rely heavily on mouse input instead. Building great touch interfaces into your phone apps requires a level of understanding that goes far beyond what you already know about mouse interfaces, especially if you want to support multi-touch – actions involving two or more fingers in contact with the screen at the same time. A quick search of published literature reveals that quality information regarding sophisticated touch interfaces is lacking, so I decided to write a series of articles about touch input in Silverlight for Windows Phone. This is the first in that series.
For starters, there are several ways you can process touch input in a phone application. The first and simplest way to go about it is to ignore touch events and use mouse events. This approach has the following advantages:
- You can implement a touch interface using a paradigm you’re already familiar with (mouse events)
- The same code works for mouse input in a desktop app and touch input for a phone app
- You can test your touch logic in the Windows Phone emulator using the mouse
It also comes with a few disadvantages:
- Using mouse events to process touch input restricts you to single-touch interfaces only
- There is no built-in support for inertia or gestures; if you wish to differentiate between, for example, taps and flicks, or you want to support pinch gestures for zooming, you’re on your own (see the sketch after this list)
- Using mouse events is the least performant way to respond to touch input
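To make the gesture point concrete, here is one way you might tell a tap from a flick using nothing but mouse events. This is a minimal sketch of my own, not a built-in API; the handler names and the 20-pixel and 1,000-pixels-per-second thresholds are arbitrary choices for illustration:

// Hypothetical tap-vs-flick discrimination built on promoted mouse events
private Point _downPos;
private DateTime _downTime;

private void OnDown(object sender, MouseButtonEventArgs e)
{
    // Record where and when the finger went down
    _downPos = e.GetPosition(null);
    _downTime = DateTime.Now;
}

private void OnUp(object sender, MouseButtonEventArgs e)
{
    Point upPos = e.GetPosition(null);
    double dx = upPos.X - _downPos.X;
    double dy = upPos.Y - _downPos.Y;
    double distance = Math.Sqrt(dx * dx + dy * dy);
    double seconds = (DateTime.Now - _downTime).TotalSeconds;

    if (distance < 20)
    {
        // The finger barely moved: treat it as a tap
    }
    else if (seconds > 0 && distance / seconds > 1000)
    {
        // The finger moved far and fast: treat it as a flick toward (dx, dy)
    }
}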
How is it possible to use mouse events to process touch input when a phone has no mouse attached? Simple. Silverlight for Windows Phone promotes touch events to mouse events unless explicitly instructed not to. A touch event indicating that a finger has touched the screen becomes a MouseLeftButtonDown event. Similarly, touch events indicating that a finger has left the screen or is moving across the screen are promoted to MouseLeftButtonUp and MouseMove events, respectively. Silverlight for Windows Phone also fires MouseEnter and MouseLeave events when a finger enters or leaves a XAML object.
Saying that touch events are promoted to mouse events isn’t entirely accurate – at least, it’s not the whole story. In reality, only primary touch events are promoted to mouse events. Primary touch events are ones involving the first finger that touches the screen. So, if you place one finger on the screen, move it around, and then lift it up, all the touch events generated by those actions are promoted to mouse events. However, let’s say you put one finger on the screen, and then a second, and then move both fingers around the screen at the same time. Only the touch events generated by the first finger are promoted to mouse events, because that finger represents the primary touch point. That’s why you can’t build multi-touch interfaces using mouse events: the user may have multiple fingers touching the screen, but only the events emanating from one of those fingers get promoted to mouse events.
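Incidentally, if you’re wondering how to “explicitly instruct” Silverlight not to promote touch events, the escape hatch is the low-level Touch.FrameReported event, which is a topic for another article in this series. Here is a minimal sketch, intended as a preview rather than a full explanation:

// Suppressing mouse promotion with the low-level touch API. Promotion can
// only be suspended while the primary touch point is in the Down state.
public MainPage()
{
    InitializeComponent();
    Touch.FrameReported += OnTouchFrameReported;
}

private void OnTouchFrameReported(object sender, TouchFrameEventArgs e)
{
    // GetPrimaryTouchPoint reports on the first finger that touched the screen
    TouchPoint primary = e.GetPrimaryTouchPoint(null);

    if (primary != null && primary.Action == TouchAction.Down)
    {
        // No mouse events will be generated until the primary finger lifts
        e.SuspendMousePromotionUntilTouchUp();
    }
}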
With that in mind, let’s start with a simple example that demonstrates how to change the color of a rectangle when touched:
<!-- MainPage.xaml -->
<Rectangle x:Name="Rect" Width="300" Height="200" Fill="Red"
    MouseLeftButtonDown="OnRectangleClicked" />

// MainPage.xaml.cs
void OnRectangleClicked(object sender, MouseButtonEventArgs e)
{
    Rect.Fill = new SolidColorBrush(Colors.Blue);
}
In a desktop Silverlight application, this changes the rectangle’s color when the rectangle is clicked with the mouse. In a phone application, the same code changes the rectangle’s color when the rectangle is touched with a finger. (No, you can’t tell which finger was used, or even that it really was a finger and not, say, a toe. <g>)
You can test this code on a phone, or you can test it in the Windows Phone emulator. Regardless of how you test it, you should find that when you touch the red rectangle with a finger, it immediately changes to blue.
Now let’s look at a more sophisticated example, one that allows the user to drag a couple of rectangles around the screen with a finger. Each rectangle turns yellow when a finger goes down on it and reverts to its original color (red or blue) when the finger is lifted:
<!-- MainPage.xaml -->
<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <Rectangle Width="100" Height="100" Fill="Red"
        MouseLeftButtonDown="OnMouseLeftButtonDown"
        MouseMove="OnMouseMove"
        MouseLeftButtonUp="OnMouseLeftButtonUp">
        <Rectangle.RenderTransform>
            <TranslateTransform x:Name="RedTransform" Y="-100" />
        </Rectangle.RenderTransform>
    </Rectangle>
    <Rectangle Width="100" Height="100" Fill="Blue"
        MouseLeftButtonDown="OnMouseLeftButtonDown"
        MouseMove="OnMouseMove"
        MouseLeftButtonUp="OnMouseLeftButtonUp">
        <Rectangle.RenderTransform>
            <TranslateTransform x:Name="BlueTransform" Y="100" />
        </Rectangle.RenderTransform>
    </Rectangle>
</Grid>
// MainPage.xaml.cs
public partial class MainPage : PhoneApplicationPage
{
    private Brush _color;
    private bool _dragging = false;
    private double _x, _y;
    private Point _start;

    // Constructor
    public MainPage()
    {
        InitializeComponent();
    }

    private void OnMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        Rectangle rect = sender as Rectangle;
        TranslateTransform transform = rect.RenderTransform as TranslateTransform;

        // Change the rectangle’s fill color to yellow
        _color = rect.Fill;
        rect.Fill = new SolidColorBrush(Colors.Yellow);

        // Save the X and Y properties of the transform
        _x = transform.X;
        _y = transform.Y;

        // Save the starting position of the cursor
        _start = e.GetPosition(null);

        // Capture the mouse
        rect.CaptureMouse();
        _dragging = true;
    }

    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        if (_dragging)
        {
            Rectangle rect = sender as Rectangle;
            TranslateTransform transform =
                rect.RenderTransform as TranslateTransform;

            // Get the current position of the cursor
            Point pos = e.GetPosition(null);

            // Compute the offset from the starting position
            double dx = pos.X - _start.X;
            double dy = pos.Y - _start.Y;

            // Apply the deltas to the transform
            transform.X = _x + dx;
            transform.Y = _y + dy;
        }
    }

    private void OnMouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
        Rectangle rect = sender as Rectangle;

        // Restore the rectangle’s original fill color
        if (_color != null)
        {
            rect.Fill = _color;
            _color = null;
        }

        // Release the mouse
        rect.ReleaseMouseCapture();
        _dragging = false;
    }
}
Apart from the PhoneApplicationPage base class, the exact same code compiles and runs just fine in a desktop app. But on a phone, you’ll find that it provides a reasonably nimble touch interface.
Note that when a rectangle is touched, the code above doesn’t store the rectangle’s original color in the rectangle itself. Instead, it copies the color into a private field named _color that both rectangles share. Sharing one field would be a problem if mouse-based interfaces supported multi-touch (both rectangles could then be moved at once with two fingers), but since they support single-touch only, the two rectangles can never be moved concurrently. Try it: use your index finger to move one of the rectangles, and without lifting that finger, try to move the other rectangle with a second finger. It doesn’t work.
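If you did need per-rectangle state – say, to prepare this code for a multi-touch version – one simple option (my suggestion; the code above deliberately doesn’t do this) is to stash each rectangle’s original brush in its Tag property:

// Hypothetical per-rectangle alternative: store the original brush on the
// rectangle itself instead of in a field shared by both rectangles
private void OnMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
    Rectangle rect = sender as Rectangle;
    rect.Tag = rect.Fill; // remember this rectangle’s own color
    rect.Fill = new SolidColorBrush(Colors.Yellow);
    // ... remainder of the handler as before ...
}

private void OnMouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    Rectangle rect = sender as Rectangle;
    Brush original = rect.Tag as Brush;

    if (original != null)
    {
        rect.Fill = original; // restore this rectangle’s own color
        rect.Tag = null;
    }
    // ... remainder of the handler as before ...
}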
If you want to move both rectangles at once, you need multi-touch. And that’s the subject of the next article.