
Give a Hand, or the Finger, to Technology.

One day, while I was shuffling between screens in Safari, the last tab popped out into its own small window in the corner of my screen. Because this had never happened before, I hastily shut it, fearing a viral infestation that would steal my life savings, edit my LinkedIn profile, or browse photos of my cats.

When it happened again, I realized that the wriggling of my fingers in the vicinity of the trackpad was the cause. My laptop was translating random, meaningless hand movements into instructions for my tech.

It’s sort of miraculous that we communicate with tech through our fingers, making it an extension of our thoughts and actions. In the early days of personal ICT devices, there was concern about whether humans would adapt to navigating around the screen with a remote appendage (aka, a mouse). Surprisingly, this wasn’t difficult.1 The technological challenge proved to be engineering the mouse to glide smoothly enough to keep up with the hand’s motion.2

And so was born the realm of hand gestures for relaying instructions to tech. Some were intuitive (pointing at something to select it – this works on the screen, at the meat counter, or when signalling a turn) and some learned (double-clicking to open something – works on electronic files, not cans of soup, doors, or your landlord’s heart).

Fast forward to now: smartphones and laptops provide 10 to 20 square inches of touchable real estate where we can directly communicate instructions to our tech. However, the meaning of these finger-based instructions is engineered by the manufacturers of the devices. Communication may be a two-way interchange, but this language is one users must learn.

Apologies to the computer scientists and others who have studied human gestures to understand how to transform what comes naturally to humans into an intuitive way to communicate with machines. From a user’s perspective, though, I get a little frustrated when my tech does something random and unwanted.

First, let’s celebrate the successes3:

  • pinching index finger and thumb together to make an image smaller, or moving them apart to make it bigger – makes perfect sense.
  • making a swiping motion to turn pages, advance to the next screen, or dismiss uninteresting pages. There is a reason ‘turn the page’ is a common expression for moving on.
  • rotating an image by turning your wrist with your fingers on the screen/trackpad – got it, my wrist was designed by evolution for turning things in my grasp.
  • pointing at things or tapping on them to select them.

The key may be finding an easily translatable concept. Like going home. While it’s not a hand gesture that I know of, going home, or back to the main menu, feels natural: starting over, starting again, going back to the beginning. It works.

Hand motions that don’t work for me:

  • anything where the difference is two, three, or four fingers on the screen/trackpad. Maybe this works for guitar players, but it isn’t intuitive otherwise.
  • ones that are subtle variations of common ones. For example, a slight downward slant to a left swipe pulls up a special screen. I wanted to go to the next screen, but the angle of the phone was a little off, someone called my name as I was swiping, or I just wasn’t paying enough attention.
  • ones that I wouldn’t have guessed existed unless I read the manual (or a forum). It’s astonishing what you find when you google – who knew the way to shut down a laptop was, ironically, to press and hold the on button? 

As tech evolves into wearables and immersive experiences, will hand-gesture communication get more complex or more intuitive? Recent reports of Meta adding accurate hand-motion tracking to its virtual reality interfaces got me wondering. It’s easy to see how this would be a great addition if you were gaming and using your hand motions to shoot at the enemy, or if the VR were a training simulation that required manual dexterity, like plumbing or surgery.

It’s not clear (at least to me) how everyday communication tech will integrate virtual and augmented reality. Perhaps someday we will no longer have a separate physical device to send and receive messages from our loved ones, coworkers, and service providers, or to look up information, make arrangements, and buy stuff. Instead, this may evolve into a chip embedded in our bodies, or a discreet wearable, like fashion glasses or a tiny earbud.

There are all sorts of possibilities for how we could interact with future tech to let it know what we’re interested in, want to ignore, or want to buy when the price comes down. Hand gestures may be only part of it. Body gestures could be adapted. Imagine blinking to select something, then having a bit of mascara fall into your eye while shopping and accidentally ordering 15 bunches of celery.

I hope lessons from the use of external tech will translate into more effective hand gestures in the future: avoid anything that requires hours of practice to learn, and make instructions unique enough that they won’t happen by accident. Otherwise, I’ve got a message for my tech when it’s doing something I don’t want it to. It looks like my middle finger stuck in the air. 🖕

1 Neurobiologists and other experts probably have a good explanation, but I’m looking at this from a tech development perspective. 

2 This is my interpretation of a “60 Minutes” interview with David Kelley, a creative entrepreneur who designed the first mouse for Apple.

3 I should say that my user experiences are primarily on Apple devices, so they may not be the same for other interfaces.

Thanks for reading.

