In Part 1 and Part 2 of this series, I described how to build touch interfaces for phone apps using mouse events and Touch.FrameReported events. Part 3 presents yet another way to respond to touch input: manipulation events.
Manipulation events originated in WPF, and they’re substantially richer in WPF than in Silverlight for Windows Phone. Charles Petzold has ably documented the differences in an article of his own. Still, even in their somewhat limited form, manipulation events can be useful in certain scenarios – particularly scenarios involving simple one-finger dragging or panning or, to a lesser extent, scenarios that involve two-finger pinching (typically used for zooming). One of the nice features of manipulation events is that if the fingers are still moving when they leave the screen, the completion events are accompanied by velocity information that can be used to simulate inertia. For example, if a series of manipulation events indicating that a finger has dragged across the screen ends with a non-zero velocity, you can use that velocity in an animation to effect a flick rather than a drag.
As an introduction to manipulation events, consider the following code sample, which displays a simple rectangle that the user can move with a finger. When the rectangle is touched, it turns yellow, and when it’s released, it reverts to red:
// MainPage.xaml
<Rectangle Width="100" Height="100" Fill="Red"
    ManipulationStarted="OnManipulationStarted"
    ManipulationDelta="OnManipulationDelta"
    ManipulationCompleted="OnManipulationCompleted">
    <Rectangle.RenderTransform>
        <TranslateTransform />
    </Rectangle.RenderTransform>
</Rectangle>
// MainPage.xaml.cs
private void OnManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    // Save the rectangle's old fill color, and then change it to yellow
    Rectangle rect = sender as Rectangle;
    rect.Tag = rect.Fill;
    rect.Fill = new SolidColorBrush(Colors.Yellow);
}

private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    Rectangle rect = sender as Rectangle;
    TranslateTransform transform = rect.RenderTransform as TranslateTransform;

    // Move the rectangle
    transform.X += e.DeltaManipulation.Translation.X;
    transform.Y += e.DeltaManipulation.Translation.Y;
}

private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
{
    // Restore the rectangle's original fill color
    Rectangle rect = sender as Rectangle;
    rect.Fill = rect.Tag as Brush;
}
Unlike Touch.FrameReported events, which fire at the application level, manipulation events (ManipulationStarted, ManipulationDelta, and ManipulationCompleted) are fired by individual UI elements. As a finger moves across the screen, ManipulationDelta events fire and convey information about that movement in properties named DeltaManipulation and CumulativeManipulation. The former indicates how much movement has occurred in the X and Y directions since the last ManipulationDelta event, while the latter reveals how much the finger has moved since the operation began (that is, since the ManipulationStarted event fired). You can also read ManipulationDeltaEventArgs’ ManipulationOrigin property to get the starting point for the manipulation.
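To see the difference between the two, a throwaway handler like the one below (the handler name and the logging are mine, not part of the downloadable samples) dumps all three values to the debugger as you drag a finger around:

private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    // Movement since the previous ManipulationDelta event
    double dx = e.DeltaManipulation.Translation.X;
    double dy = e.DeltaManipulation.Translation.Y;

    // Total movement since ManipulationStarted fired
    double cx = e.CumulativeManipulation.Translation.X;
    double cy = e.CumulativeManipulation.Translation.Y;

    // Point at which the manipulation began
    Point origin = e.ManipulationOrigin;

    System.Diagnostics.Debug.WriteLine(string.Format(
        "Delta=({0:F0},{1:F0}) Cumulative=({2:F0},{3:F0}) Origin=({4:F0},{5:F0})",
        dx, dy, cx, cy, origin.X, origin.Y));
}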
One piece of bad news is that you can’t use manipulation events to move two UI elements independently. Once a UI element such as a rectangle fires a ManipulationStarted event, no other UI element will fire manipulation events until that element fires a ManipulationCompleted event. Of course, you can always fall back to Touch.FrameReported events if you need simultaneous motion.
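For completeness, here’s a rough sketch of the kind of Touch.FrameReported fallback I mean. It isn’t part of this article’s samples, and the rectangles it moves are hypothetical (each is assumed to have a TranslateTransform as its RenderTransform, like the rectangle in the first example). The idea is simply that each finger is identified by its TouchDevice.Id, so two fingers can drag two elements at the same time:

// In the page constructor: Touch.FrameReported += OnFrameReported;
private Dictionary<int, Point> lastPositions = new Dictionary<int, Point>();
private Dictionary<int, Rectangle> capturedRects = new Dictionary<int, Rectangle>();

private void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    foreach (TouchPoint point in e.GetTouchPoints(null))
    {
        int id = point.TouchDevice.Id;

        if (point.Action == TouchAction.Down)
        {
            // Remember which rectangle (if any) this finger landed on
            Rectangle rect = point.TouchDevice.DirectlyOver as Rectangle;
            if (rect != null)
            {
                capturedRects[id] = rect;
                lastPositions[id] = point.Position;
            }
        }
        else if (point.Action == TouchAction.Move && capturedRects.ContainsKey(id))
        {
            // Apply this finger's movement delta to the rectangle it grabbed
            Rectangle rect = capturedRects[id];
            TranslateTransform transform = rect.RenderTransform as TranslateTransform;
            transform.X += point.Position.X - lastPositions[id].X;
            transform.Y += point.Position.Y - lastPositions[id].Y;
            lastPositions[id] = point.Position;
        }
        else if (point.Action == TouchAction.Up)
        {
            capturedRects.Remove(id);
            lastPositions.Remove(id);
        }
    }
}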
On the positive side, manipulation events include inertia data. In the example above, the rectangle stops moving the moment the finger leaves the screen, even if the finger was still moving when it lifted. If you look up ManipulationCompletedEventArgs in the documentation, you’ll find that it contains properties named IsInertial and FinalVelocities. (ManipulationDeltaEventArgs also has a property named Velocities, but the velocities exposed through it are always 0.) FinalVelocities, in turn, has a property named LinearVelocity. If the finger was still moving when it broke contact with the screen, IsInertial will be true, and LinearVelocity.X and LinearVelocity.Y will tell you how fast the finger was moving. The values exposed here are typically pretty large: on the order of 3,000 or 4,000 if you flick the screen quickly.
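An easy way to see these numbers for yourself is to instrument a ManipulationCompleted handler along these lines (again, a throwaway sketch rather than part of the samples):

private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
{
    if (e.IsInertial)
    {
        // After a quick flick, these values are typically in the thousands
        double vx = e.FinalVelocities.LinearVelocity.X;
        double vy = e.FinalVelocities.LinearVelocity.Y;
        System.Diagnostics.Debug.WriteLine(
            string.Format("Flick velocity: ({0:F0}, {1:F0})", vx, vy));
    }
}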
The next example demonstrates how to use these velocities in combination with animation and animation easing to add a nice touch to panning gestures. The application displays a 2048×480 panoramic image stitched together from several smaller photos I snapped in England last year. Only an 800×480 portion of the image is visible at any given time, so the application uses ManipulationDelta events to pan the image. Furthermore, when a ManipulationCompleted event fires, the application launches an animation that continues the panning motion if IsInertial is true. Finally, it uses a CircleEase to decelerate the animation and lend the whole affair a more realistic feel. Here’s the application running in the emulator:
And here’s the code that makes it work. Note that GPU acceleration is enabled for the image to make panning and animating as smooth as possible:
// MainPage.xaml
<Grid x:Name="ContentPanel" Width="2048" Height="480">
    <Image Source="Stonehenge.jpg" Width="2048" Height="480" CacheMode="BitmapCache"
        ManipulationDelta="OnManipulationDelta"
        ManipulationCompleted="OnManipulationCompleted">
        <Image.RenderTransform>
            <TranslateTransform x:Name="PanTransform" />
        </Image.RenderTransform>
        <Image.Resources>
            <Storyboard x:Name="Pan">
                <DoubleAnimation x:Name="PanAnimation"
                    Storyboard.TargetName="PanTransform"
                    Storyboard.TargetProperty="X" Duration="0:0:1">
                    <DoubleAnimation.EasingFunction>
                        <CircleEase EasingMode="EaseOut" />
                    </DoubleAnimation.EasingFunction>
                </DoubleAnimation>
            </Storyboard>
        </Image.Resources>
    </Image>
</Grid>
// MainPage.xaml.cs
private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    // First make sure we're translating and not scaling (one finger vs. two)
    if (e.DeltaManipulation.Scale.X == 0.0 && e.DeltaManipulation.Scale.Y == 0.0)
    {
        Image photo = sender as Image;
        TranslateTransform transform = photo.RenderTransform as TranslateTransform;

        // Compute the new X component of the transform, clamping it so the
        // image never pans past either edge. Host.Content.ActualHeight reports
        // the screen's long (800-pixel) dimension, which is the visible width
        // in this landscape layout.
        double x = transform.X + e.DeltaManipulation.Translation.X;

        if (x > 0.0)
            x = 0.0;
        else if (x < Application.Current.Host.Content.ActualHeight - photo.ActualWidth)
            x = Application.Current.Host.Content.ActualHeight - photo.ActualWidth;

        // Apply the computed value to the transform
        transform.X = x;
    }
}

private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
{
    if (e.IsInertial)
    {
        Image photo = sender as Image;

        // Compute the inertial distance to travel
        double dx = e.FinalVelocities.LinearVelocity.X / 10.0;
        TranslateTransform transform = photo.RenderTransform as TranslateTransform;
        double x = transform.X + dx;

        if (x > 0.0)
            x = 0.0;
        else if (x < Application.Current.Host.Content.ActualHeight - photo.ActualWidth)
            x = Application.Current.Host.Content.ActualHeight - photo.ActualWidth;

        // Apply the computed value to the animation
        PanAnimation.To = x;

        // Trigger the animation
        Pan.Begin();
    }
}
In addition to providing reasonable support for one-finger drag and flick operations, the manipulation events also support “pinch” gestures – two fingers on the screen moving together or apart. Phone applications often use these gestures to zoom in and out. Pinch-zooming is a standard of sorts and is used on a variety of mobile platforms, including the iPhone, Android phones, and, of course, Windows phones.
The next example demonstrates how to use manipulation events to implement interactive zoom in a phone app. The key to pinch zooming is the Scale property exposed through ManipulationDeltaEventArgs.DeltaManipulation. If only one finger is touching the screen, Scale will contain zeroes but DeltaManipulation.Translation will contain non-zero values. If two fingers are touching the screen, the roles are reversed: Scale will contain non-zero values and Translation will contain zeroes. This application responds to non-zero values of Scale.X and Scale.Y by manipulating a ScaleTransform that in turn scales a XAML penguin. The result? Spread two fingers apart and the penguin zooms in; pinch them together and the penguin zooms out. Here’s the output:
And here’s the code:
// MainPage.xaml
<Grid x:Name="LayoutRoot" Background="#FF101010" ManipulationDelta="OnManipulationDelta">
    .
    .
    .
    <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
        <Canvas x:Name="PenguinCanvas" Width="340" Height="322"
            RenderTransformOrigin="0.5,0.5">
            <Canvas.RenderTransform>
                <ScaleTransform x:Name="PenguinTransform" />
            </Canvas.RenderTransform>
            <Ellipse Fill="#FF050505" Stroke="#FF000000" x:Name="OuterBody"
                Width="243" Height="286" Canvas.Left="46" Canvas.Top="21" />
            .
            .
            .
        </Canvas>
    </Grid>
</Grid>
// MainPage.xaml.cs
private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    if (e.DeltaManipulation.Scale.X > 0.0 && e.DeltaManipulation.Scale.Y > 0.0)
    {
        // Scale in the X direction
        double tmp = PenguinTransform.ScaleX * e.DeltaManipulation.Scale.X;
        if (tmp < 1.0)
            tmp = 1.0;
        else if (tmp > 4.0)
            tmp = 4.0;
        PenguinTransform.ScaleX = tmp;

        // Scale in the Y direction
        tmp = PenguinTransform.ScaleY * e.DeltaManipulation.Scale.Y;
        if (tmp < 1.0)
            tmp = 1.0;
        else if (tmp > 4.0)
            tmp = 4.0;
        PenguinTransform.ScaleY = tmp;
    }
}
It looks reasonable from the outside, but when you try it, you’ll encounter a couple of quirks that you probably won’t like. First, the penguin scales independently in the X and Y directions. Typically you want to scale uniformly when zooming in or out, but manipulation events make that surprisingly awkward. Second, if you rotate the phone to landscape mode, you’ll find that X and Y are reversed: pinching the fingers together horizontally reduces the penguin’s height, while pinching them together vertically reduces its width. We could remedy both problems with a bit of extra code, but there’s little reason to do so given that Silverlight for Windows Phone offers a much better way to respond to pinch gestures, and to other types of gestures, too. That better way is the subject of the next article.
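For what it’s worth, here’s one rough workaround, not used in this sample: collapse the two per-axis factors into a single uniform factor and apply it to both ScaleX and ScaleY. Because both axes then get the same treatment, the landscape X/Y swap stops mattering. This is a sketch built on the same PenguinTransform as the code above, not a polished solution:

private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    double scaleX = e.DeltaManipulation.Scale.X;
    double scaleY = e.DeltaManipulation.Scale.Y;

    // Scale values are zero for one-finger manipulations, so non-zero
    // values mean a pinch is in progress
    if (scaleX > 0.0 || scaleY > 0.0)
    {
        // Collapse the two per-axis factors into one uniform factor
        double factor = (scaleX > 0.0 && scaleY > 0.0) ?
            (scaleX + scaleY) / 2.0 : Math.Max(scaleX, scaleY);

        // Clamp the cumulative zoom between 1x and 4x, as before
        double zoom = PenguinTransform.ScaleX * factor;
        zoom = Math.Max(1.0, Math.Min(4.0, zoom));

        // Apply the same value to both axes so the penguin scales uniformly
        PenguinTransform.ScaleX = zoom;
        PenguinTransform.ScaleY = zoom;
    }
}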
I didn’t take advantage of it in this sample, but be aware that you can also add inertia to pinch gestures. When a ManipulationCompleted event fires at the end of a pinch gesture, the ManipulationCompletedEventArgs.FinalVelocities.ExpansionVelocity property contains X and Y inertia data.
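Reading it looks something like this (a sketch only; in a real app you’d feed the values into a scale animation much as the panning sample feeds LinearVelocity into PanAnimation):

private void OnManipulationCompleted(object sender, ManipulationCompletedEventArgs e)
{
    if (e.IsInertial)
    {
        // Positive values mean the fingers were still spreading apart when
        // they lifted; negative values mean they were still pinching together
        Point expansion = e.FinalVelocities.ExpansionVelocity;
        System.Diagnostics.Debug.WriteLine(
            string.Format("Expansion velocity: ({0:F0}, {1:F0})", expansion.X, expansion.Y));
    }
}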
In summary, manipulation events are great for implementing simple dragging and panning operations, but less useful, in my opinion, for zooming. But don’t despair. The fourth and final article in this series will cap things off by describing how to use the Silverlight for Windows Phone Toolkit to add rich gesture support to a phone application. Meanwhile, if you’d like to play with the samples presented in this article, you can download them from Wintellect’s Web site. Ideally you’ll want a real phone to run them on: the first and second examples work in the emulator, but performance is much better on a device, and the third example only works in the emulator if you have a multi-touch screen. To deploy to a device, you’ll need to be registered as a Windows phone developer, and the phone will have to be unlocked so you can push the samples out to it.