Tuesday, November 1, 2011

Blog Paper #24 - Gesture Avatar: a technique for operating mobile user interfaces using gestures

  • Paper: Gesture Avatar: a technique for operating mobile user interfaces using gestures
  • Authors:
    • Hao Lu, Computer Science and Engineering at University of Washington
    • Yang Li with Google Research
    • Presented at CHI 2011, the ACM Conference on Human Factors in Computing Systems.

    • Hypothesis: Is a gesture-based method of selecting small items on a touch screen feasible and user-friendly?
    • Methods: To test the hypothesis, Lu and Li implemented Gesture Avatar for Android and compared it against Shift, a previously published technique for acquiring small targets. The application was tested by 20 participants; Lu and Li had them use both Shift and Gesture Avatar (10 using Shift first and 10 using Gesture Avatar first) with different target sizes, different numbers of targets, and while sitting versus walking.
    • Results: Gesture Avatar was quicker than Shift in finding larger targets, but slower with smaller ones. With Shift, walking significantly increased the error rate, but walking had no effect on Gesture Avatar's error rate. Both of these results were consistent with their hypotheses. A strong majority of participants preferred Gesture Avatar over Shift.

    Summary

         This article introduces Gesture Avatar, a technique in which a gesture drawn on a smartphone screen acts as a larger stand-in for a target that is too small to tap reliably. This addresses the "fat-finger" problem and the finger-occlusion problem, which can cause people to tap the wrong link on their phone. Lu and Li's solution, which they compare against the earlier Shift technique, lets a user draw a gesture (for example, the shape of the first letter of a link), find the appropriate link, and then use the gesture as a stand-in for the button. A key part of the design is a matching algorithm that considers not only which targets fit the drawn gesture but also how close they are to where it was drawn, which reduces errors. In the future, they plan to implement a way for the avatar to acquire moving targets.
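         The matching step is easy to picture in code. Below is a minimal sketch, in Python, of how a drawn gesture might be bound to the most likely on-screen target. The scoring here (first-letter match plus a simple distance falloff) is my own simplification for illustration, not the authors' actual algorithm, and the target list is made up.

```python
import math

def match_target(gesture_letter, gesture_center, targets):
    """Pick the target that best matches a drawn letter gesture.

    `targets` is a list of dicts like {"label": "Help", "x": 45, "y": 300}.
    Illustrative simplification: score = first-letter match + proximity
    to where the gesture was drawn.
    """
    best, best_score = None, float("-inf")
    for t in targets:
        # 1 if the target's first letter matches the recognized gesture, else 0
        letter_score = 1.0 if t["label"][:1].lower() == gesture_letter.lower() else 0.0
        # Closer targets score higher (simple inverse-distance falloff)
        dist = math.hypot(t["x"] - gesture_center[0], t["y"] - gesture_center[1])
        proximity = 1.0 / (1.0 + dist / 100.0)
        score = letter_score + proximity
        if score > best_score:
            best, best_score = t, score
    return best

links = [{"label": "Home", "x": 40, "y": 60},
         {"label": "Help", "x": 45, "y": 300},
         {"label": "About", "x": 200, "y": 80}]
print(match_target("h", (50, 280), links)["label"])  # -> "Help"
```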

    My View

         When I use an iPhone / iPod / Android / any small touchscreen,  I have my own solution to the "fat-fingered" problem. I zoom in on the link I want to click until I can't miss, and then I click on it. So initially, my thought was that I have no need for this because it really doesn't seem to make a difference in terms of either time or ease of strokes. But then I realized the other things it could be used for, especially selecting a particular letter when writing text. I tell you, there are not many things more frustrating than typing on an iPhone, especially dealing with typos. In addition, the added feature to catch moving objects really intrigues me, because if I zoom in on a moving object, it goes off the screen and I have to chase it. So while I'm not sure if I would ever use this for tiny links, I think they might be on to something here when it comes to usage on a smart phone in general.

    Saturday, September 24, 2011

    Paper Reading #12: Enabling Beyond-Surface Interactions for Interactive Surface with An Invisible Projection

    • Paper: Enabling Beyond-Surface Interactions for Interactive Surface with An Invisible Projection
    • Authors:
      • Li-Wei Chan
      • Hsiang-Tao Wu
      • Hui-Shan Kao
      • Ju-Chun Ko
      • Home-Ru Lin
      • Mike Y. Chen
      • Jane Hsu
      • Yi-Ping Hung
      • All authors are published researchers at the Graduate Institute of Networking and Multimedia at National Taiwan University.
      • Presented at the 23rd annual ACM symposium on User interface software and technology.


      • Hypothesis: Is a "Beyond-the-Surface" application using programmable infrared cameras on a touch-screen tabletop possible and useful?
      • Methods: To test the hypothesis, Chan et al. developed a prototype tabletop with a touch-screen interface, as well as invisible infrared "markers" under the screen that display information when illuminated with infrared light. They also developed prototypes of the i-m-Lamp, i-m-Flashlight, and i-m-View as means of displaying the infrared data. Using these prototypes, they conducted a basic study by asking users to try the products and explain their thoughts.
      • Results: Reviews of the product were mostly positive. Most negative feedback was directed at the i-m-View. Some of the issues identified included a "feeling of isolation": participants stated that using the handheld tablet cut them off from the table and the other users. Also, some participants complained that they could not get the full 3D scope they desired on the i-m-View and that the orientation couldn't be switched.

      Summary

           This article introduces a touch-screen tabletop application that includes infrared "markers" beneath the surface that can be read under infrared light. These markers seem to work in a way similar to the iPhone's MobileTag app, where you can access data by snapping a photo of a black-and-white patterned square. The authors provide three devices for supplying the infrared light that reads the data. The first is i-m-Lamp, which has the appearance of a desk lamp and is used for stable access to a section of the surface that a user might want to work with for a while. The i-m-Flashlight is similar to the i-m-Lamp, but is used for quick movement between markers. Finally, the i-m-View uses an infrared camera on the back of a computing tablet to display the hidden information on the tablet screen. The authors received mostly positive reviews of their work and plan to continue their research to address some of the issues brought up during their testing.

      My View

           This all looks very familiar. This looks very much like the technology used in the Nintendo 3DS. Granted, these papers were published before the 3DS release, but knowing about the 3DS kind of takes the "new and exciting" element out of this research for me.
           That being said, I am actually fairly impressed with what the authors have done with this technology. In particular, I like the versatility they have offered by creating three different products for different uses. You have the stable view from the i-m-Lamp, the versatility of the i-m-Flashlight, and the personalized viewing from the i-m-View. This product could be very useful in urban planning by viewing a map of an area and leaving notes under the various thumbtacks to be viewed through the markers. Also, this might be a stretch, but maybe this could be used for medical training purposes too, which could cut down on the need for cadavers.
           I'm not really sure where else the technology could go towards, but I feel like this is a good stepping stone into the future of tabletop applications.

      Paper Reading #11: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input

      • Paper: High-Precision Interaction with Back-Projected Floors Based on High-Resolution Multi-Touch Input
      • Authors:
        • Thomas Augsten
        • Konstantin Kaefer
        • René Meusel
        • Caroline Fetzer
        • Dorian Kanitz
        • Thomas Stoff 
        • Torsten Becker
        • Christian Holz
        • Patrick Baudisch
        • All authors represent the Hasso Plattner Institute in Potsdam, Germany, and with the exception of Becker, Holz, and Baudisch, this is their first publication.
        • Presented at the 23rd annual ACM symposium on User interface software and technology.


        • Hypothesis: Is a touch interface via a floor a viable user interface for large scale applications that cannot fit on a touch tabletop?
        • Methods: Augsten et al. developed a small prototype and ran several tests on it. First, they asked participants to "activate" two pretend buttons on a floor while "not activating" two others, to observe the most natural way of activating a button with their feet. Next, they conducted a study in which they asked participants to identify when they were "pressing" smaller buttons, to determine what the default "hotspot" of the foot should be. Lastly, they experimented with keyboards of three different sizes to see which was most popular in terms of ease of use.
        • Results: The first set of tests had extremely varied results, with strategies ranging from double-tapping vs walking around the button to hitting it with the right foot vs the left. However, simply walking was popular enough for "not-activation" to keep it the standard. The results of the second test were also varied, which led to the adoption of the calibration option for the foot's "hotspot". The third test, however, was nearly unanimous in favor of the large keyboard.

        Summary

             This article introduces the idea of a touch-based interface operated with your feet. The technology described here reads the sole of one's shoes to identify the user and, based on the pressure of certain areas of the sole, allows the user to interact with different applications. In addition, since the user is recognized by their footprint, if another user walks into their space, their work won't be affected. Examples of interactions include jumping to bring up a menu, tapping to activate a button, and shifting weight for head tracking. Augsten et al. feel as though they have produced a successful product that will allow for much larger applications than a touch-based tabletop, and they plan to continue their research to develop a larger prototype and investigate its use in "smart rooms".
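             The per-user hotspot calibration mentioned above is the part easiest to sketch in code. The following is a rough illustration under my own assumptions about the data (the authors' real system works on high-resolution images of the sole, which this ignores): average the offset between the foot's contact centroid and the crosshair the user was aiming at during calibration, then apply that offset to later contacts.

```python
def calibrate_hotspot(samples):
    """Compute a per-user hotspot offset from calibration taps.

    Each sample pairs the centroid of the foot's contact area with the
    known target the user was asked to step on:
        samples = [((cx, cy), (tx, ty)), ...]
    Returns the average (dx, dy) from contact centroid to intended target.
    """
    n = len(samples)
    dx = sum(tx - cx for (cx, cy), (tx, ty) in samples) / n
    dy = sum(ty - cy for (cx, cy), (tx, ty) in samples) / n
    return dx, dy

def apply_hotspot(contact_centroid, offset):
    """Shift a raw contact centroid to the user's calibrated hotspot."""
    return contact_centroid[0] + offset[0], contact_centroid[1] + offset[1]

# Three made-up calibration taps: this user consistently lands ~30 px behind
# the target, so the hotspot sits ~30 px ahead of the contact centroid.
offset = calibrate_hotspot([((100, 230), (100, 200)),
                            ((310, 428), (312, 400)),
                            ((505, 632), (503, 600))])
print(apply_hotspot((250, 530), offset))  # roughly (250, 500)
```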

        My View

             I'm reminded somewhat of that scene from "Big" where Tom Hanks is in the toy store and plays Chopsticks on a floor piano with the owner of the company he works for. But I digress...
             My first thought towards this application is "neat". It's really a good concept with a lot of interesting applications through it, and the authors obviously spent a great amount of time and effort into making this work through every aspect possible. In particular, I like the idea of using a user-calibrated hotspot on the foot to select commands or type on the keyboard because that frees up a lot of space that could be used for other applications and makes typing more manageable. This seems like a fun application and I could see some great recreational uses coming out of it.
             That being said, I do have a few concerns. Firstly, I mentioned that I could see this being for recreational purposes, but that's about it. I really cannot think of any other application that this might be used for by itself. The authors mention smart rooms as a future application, but that would have to be integrated with other technology as well to be practical, such as that used in the XBox Kinect. Also, I like how the authors addressed the issue of users changing shoes, but if they have two or more pairs of shoes that they use intermittently, do they have to select who they are every time they change shoes, or does the program still recognize them? Along that line, if the latter is true, if a certain pair of shoes is never used again, having that sole design still in the database seems like a bit of a memory leak to me. Also, I didn't really see a new user option, and one was never discussed in the paper.
             All in all, I am  much more fond of this application than some others that have been brought up in my readings, but I would love to know what exactly the authors plan on this being used for other than recreation, since I don't see myself typing a paper with my foot anytime soon.

        Thursday, September 15, 2011

        Paper Reading #8: Gesture search: a tool for fast mobile data access

        • Paper: Gesture search: a tool for fast mobile data access
        • Author: Yang Li, Google Research
        • Presented at the 23rd annual ACM symposium on User interface software and technology.

        • Hypothesis: In searching for a contact, application, etc. on a device such as an Android phone or iPhone, could gesture recognition be a suitable alternative to typing letters on the keyboard?
        • Methods: Li implemented an Android app that recognized gestures and used the user's input to quickly locate contacts, apps, etc. on their phone. He then released it to the Android marketplace and collected feedback from the first wave of users.
        • Results: Feedback was generally positive, with a vast majority of searches completed using three or fewer gestures. The only negative feedback was that people said they wouldn't need to use Gesture Search if the app they were looking for was on or near the home page.

        Summary

             Yang Li's paper introduces Gesture Search, an application that allows the user to quickly find important items on their phone by gesturing the shapes of letters on the screen. He starts off by describing the different functions of Gesture Search, such as displaying all possible results, even accounting for ambiguity (i.e., sometimes an "A" can look like an "H"). He explains that his biggest challenge was configuring the touch recognition software to tell the difference between letter gestures and routine taps and swipes. He solved this issue by adding functionality to examine the "squareness" of a gesture: the more square a gesture was, the more likely it was a letter. He then tested the application by releasing it as a beta and collecting feedback through questionnaires. He states that the feedback was very positive, receiving very high scores in the app ratings, and he can see this feature being implemented in the future.
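             As a rough illustration of the "squareness" idea (my own simplification; the real classifier considers more than this), a letter gesture tends to wander in both dimensions, while a scroll or swipe is long and thin, so the aspect ratio of the stroke's bounding box is already a useful signal:

```python
def squareness(points):
    """Ratio of the shorter to the longer side of a stroke's bounding box.

    Close to 1.0 for letter-like gestures that move in both dimensions,
    close to 0.0 for thin swipes and scrolls. `points` is [(x, y), ...].
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    longer, shorter = max(width, height), min(width, height)
    return shorter / longer if longer else 0.0

swipe = [(10, 200), (80, 202), (160, 199), (240, 201)]          # thin horizontal flick
letter = [(50, 50), (90, 120), (130, 50), (70, 90), (110, 90)]  # roughly an "A"
print(squareness(swipe))   # ~0.01 -> treat as a scroll/swipe
print(squareness(letter))  # ~0.88 -> likely a letter gesture
```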

        My View

             In my last blog, I discussed my negative feelings toward stroke-based text entry, mainly because it's not a natural motion for us. The idea of Gesture Search, on the contrary, seems like a spectacular idea to me because the "strokes" I'd be inputting are the natural letters. While I have stated my contentment with the qwerty keyboard, I will admit that I have fat fingers and typing on those tiny keys can be quite tedious. I also have a very large number of apps on my iPod Touch and an even larger number of contacts in my phone, so this method would prove extremely useful to me. Some may argue that the standard search would be just as quick if not quicker, but the way I understand it, these gestures can be done from anywhere, and it would take about as much time to make them as it would just to get to the search screen.
             The only concern I have for this application was already brought up in the article, namely how to tell the difference between a routine tap or swipe and a gesture. I've actually thought of another idea for this problem and would love to hear my faithful readers' comments on it. What about keeping a small button in the top left corner of the screen where, when it is held down, the screen gets locked to everything except Gesture Search?
             All in all, my basic point of view on this subject is, when can I download this for my iPod?

        Wednesday, September 14, 2011

        Paper Reading #7: Performance Optimizations of Virtual Keyboards for Stroke-Based Text Entry on a Touch-Based Tabletop

        • Paper: Performance Optimizations of Virtual Keyboards for Stroke-Based Text Entry on a Touch-Based Tabletop
        • Authors:
          • Jochen Rick - Published researcher for the Open University at the time of writing, currently a faculty member in the Department of Educational Technology at Saarland University.
        • Presented at the 23rd annual ACM symposium on User interface software and technology.

        • Hypothesis: Can a stroke-based keyboard be developed for a touch-activated surface that would be efficient enough to replace the standard qwerty keyboard?
        • Methods: Rick had 8 participants perform a stroke gesture through four points at varying angles and destination sizes, and measured their speed at the various angles as well as at different segments of the strokes (beginning, middle, and end). The results were then fed into an implementation of Fitts's law to determine which keyboard layout would be best suited to the stroke method.
        • Results: In terms of both speed and the improvement of stroking over tapping, the best results came from the keyboards with the smallest space between keys, such as the ATOMIK, GAG, and OPTI layouts. The wider keyboards, such as the qwerty and Dvorak layouts, performed among the worst. Based on this data, Rick was then able to optimize two new "OSK" keyboards that outperformed all others.

        Summary

             Jochen Rick's article discusses the problems with the way we currently use keyboards on touch-screen devices, namely a familiar qwerty keyboard behind a touch-screen interface. He argues that this is unnatural and that a more efficient solution can be found through stroking gestures. He takes a wide collection of previously created keyboard designs and calculates the most efficient layout using the methods described above; a rough sketch of the kind of calculation involved appears below. In his discussion, he brings up potential problems with implementing this keyboard interface in today's devices, such as the difficulty of getting people to adopt a new way of inputting words. He states, however, that if it is adopted, it could increase text input speeds by up to 50%.
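             For readers unfamiliar with Fitts's law, here is a sketch of the kind of calculation involved, using the standard Shannon formulation MT = a + b * log2(D/W + 1). The constants and key positions below are made up for illustration; Rick fits his own parameters per angle and stroke segment from his user study.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Predicted movement time (seconds) to reach a target, Shannon form:
    MT = a + b * log2(D / W + 1). Constants a, b are illustrative only."""
    return a + b * math.log2(distance / width + 1)

def stroke_time(key_centers, key_width):
    """Estimate the time to stroke a word by summing Fitts's-law times
    over consecutive key-to-key segments of the gesture path."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(key_centers, key_centers[1:]):
        d = math.hypot(x2 - x1, y2 - y1)
        total += fitts_time(d, key_width)
    return total

# Tighter key spacing shortens the path, and hence the predicted stroke time:
wide   = [(0, 0), (120, 0), (240, 40)]   # widely spaced keys for one word
narrow = [(0, 0), (60, 0), (120, 20)]    # compact layout, same word
print(stroke_time(wide, 40), stroke_time(narrow, 40))  # ~0.61 vs ~0.47
```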

        My View

             I remember we discussed this very issue in class the other day. My whole issue is that I use an iPod Touch and I'm perfectly fine with using the qwerty keyboard. It's what I've grown up with and what I'm comfortable with and not to say my typing is tremendously fast, but I can't see myself getting too much faster with a new key system.
             Rick does note in his conclusion that getting a wide enough range of people to accept this new keyboard system seems a bit impractical, and I have to agree with that. Most people use touch screens because they are convenient and fairly simple to operate, but mostly because the motions are natural to them. If you remember a couple blogs back, I absolutely geeked out at the idea of Manual Deskterity mainly because everything seemed to be activated by a fairly natural movement. To many of us, typing on a qwerty keyboard has become a natural motion for us, which is why I don't really foresee a future in stroke typing.
             I will say, however, that I fully respect the massive amount of work that went into this research. I never realized how much math could go into something as seemingly simple as designing a keyboard. I think Rick might be on the right track towards developing a widely accepted keyboard system for touch devices, but I don't think it's quite there yet.

        Ethnography - Week 0

        PRIOR PERCEPTION

                    For our ethnography this semester, my group has decided to study the life of professional video game players. We chose this group because despite being among the best of the best at what they do, they don’t appear to see a lot of recognition for their accomplishments. Through the course of this ethnography, we hope to gain a valuable understanding of the passion that drives professional gamers to keep doing what they do.
        Even though video games are an accepted part of today’s culture, there seems to be a common thought process that a person’s video game skills and social skills are inversely related. I will be the first one to disagree with that conjecture, as I enjoy playing video games myself. I would not, however, in any stretch of the imagination, call myself a professional gamer. In fact, I had no idea there existed such a thing as the MLG (Major League Gaming) until I joined this group.  I mainly play one-player story driven games, and maybe some sports games on-line (usually on the losing end). Therefore, I would love to see what goes through the minds of the people that are on the other end of my beatings.
        I am joined in this project by Neal, Andy, and Chandler. The four of us together have varied gaming experience in the sense of both how often we play and what kinds of games we play. This, I believe should be a very strong foundation for the research we are conducting, since, the way I see it, we all are coming in with different biases. Hopefully, this should help us to interpret our observations in such a way that all can agree with.
        To help kick off our research, we will be meeting with a friend of Neal’s who has climbed through the ranks to become a top tier Halo player. Our plan is to ask him some very basic questions to get a little bit of an insight into how he views the gaming world. The next report will come after we have spoken to him.

        INITIAL RESULTS

        Today, we spoke with Tyler, who, as I stated earlier, has established himself as an extremely skilled Halo player, probably within the top 10% of the over 125,000 active Halo players. While this may make him the best player any of us know, he has mentioned that it’s probably in the top percentile that the recognized professional players lie. Nevertheless, he was more than willing to spend time with us and answer some of our questions. He has even offered to join us whenever we go to interact with other hardcore gamers.
        The first things we asked Tyler about were about his personal life in the gaming sense. He said that he has only been actively playing Halo for about two years, and that it was in fact Neal that talked him into playing it initially. Within a very short amount of time, Tyler had surpassed Neal’s skill and had moved on to “pwning n00bs” through Xbox Live. He stated that if he could, he’d play video games 40 – 50 hours a week (the same amount of time as a full time job), but has unfortunately not had a lot of time to play lately due to his studies.
        Next, we talked about official tournaments of the MLG. Although Tyler had never competed in an MLG tournament, he keeps up with the tournaments via internet videos, particularly Justin.tv. We asked him who he thought were the best Halo players in the world, and he mentioned RoyBorg and IGotYourPistola (who are both on the same team) and he also talked about Ninja, who’s around the top 25, so not as high as RoyBorg and IGotYourPistola, but still gets a lot of mention on Justin.tv. He said tournaments are usually played in a round-robin style, developing into a 16-team bracket (teams usually consisting of four players and a coach), where winners advance towards the finals while losers continue in “losers brackets” to determine their placing. Depending on the size, importance, and buy-in fee of the event, the winning team could get somewhere around $125,000, and even sponsorships.
        While our meeting with Tyler was very short, Neal, Andy, Chandler and I feel as though we have gained some very useful information for our study and are excited to continue. Future steps include checking out Justin.tv to see some of the pros in action and, to get a little more interaction with this group as a whole, seeking out and attending major gaming competitions, starting with a local Gears of War 2 tournament coming up this Saturday.
        More to come.

        Thursday, September 8, 2011

        Paper Reading #4 - Gestalt

        • Paper Title: Gestalt: Integrated Support for Implementation and Analysis in Machine Learning
        • Authors:
          • Kayur Patel, Naomi Bancroft, James Fogarty, and James A. Landay of Computer Science and Engineering, DUB Group, University of Washington
          •  Steven M. Drucker of Microsoft Research
          • Andrew J. Ko of the Information School, DUB Group, University of Washington
        • Presented at 23rd annual ACM symposium on User interface software and technology, New York, NY, 2010

        Summary

             This paper discusses Gestalt, a development environment for machine learning applications (machine learning being an AI technique in which a program makes decisions based on data it is given). Gestalt is designed to simplify the process of creating machine learning applications by giving the developer easy access to both the implementation and the analysis sides of the development process. Patel et al. give two specific example problems: sentiment analysis (a way of categorizing text) and gesture recognition (as with pen-touch input). To test the effectiveness of Gestalt, the researchers designed two applications, each implementing one of the two problems, and intentionally left bugs in them. They then had participants debug the applications using both Gestalt and a "baseline" program and describe which they preferred. The study showed a unanimous preference for Gestalt, with participants particularly praising the ease of moving between implementation and analysis.
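             To make the implement-and-analyze loop concrete, here is a toy example in plain Python (not Gestalt's actual API): a trivially simple keyword-based sentiment classifier, followed by the kind of "show me the examples I got wrong" view that the paper argues should sit right next to the implementation.

```python
# Toy sentiment classifier: count positive vs. negative keywords.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"awful", "hate", "boring"}

def classify(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "pos" if score >= 0 else "neg"

labeled = [
    ("I love this movie, it is great", "pos"),
    ("Awful plot, I hate the ending",  "neg"),
    ("Boring but the acting was excellent", "pos"),
    ("Not great at all", "neg"),          # negation trips up the keyword model
]

# The "analysis" half: list misclassified examples right next to the code,
# which is the workflow Gestalt is built to streamline.
for text, truth in labeled:
    guess = classify(text)
    if guess != truth:
        print(f"MISSED: {text!r} -> predicted {guess}, expected {truth}")
```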

        My View

             Since machine learning is not a field I am particularly familiar with, this paper was a bit more difficult for me to understand than some of the others I've written about. From what I have gathered, however, it appears as though this application is the Visual Studio, if you will, of machine learning applications. For those unfamiliar with Visual Studio, it is a programming interface that will assist you in building your program and offers powerful debugging tools  that will allow you to go through each and every step and view the changes.
             If I am correct in this comparison, I could see Gestalt being a very useful tool in building and debugging programs that involve machine learning. As I said, I am not experienced in creating machine learning applications, but I have felt the frustration of not being able to access all the information I feel I need to practically debug a program, and any interface that allows me to actively see more information gets a thumbs up from me.

        Tuesday, September 6, 2011

        Paper reading #3 - Pen + Touch = New Tools

        • Paper Title: Pen + Touch = New Tools
        • Authors: Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton, all members of the Microsoft Research Team.
        • Presented at 23rd annual ACM symposium on User interface software and technology, New York, NY, 2010

        Summary

             The words the authors use to sum up this idea are, "The pen writes, touch manipulates, and the combination of pen+touch yields new tools." This is an idea that Hinckley et al. are putting into practice through their development of Manual Deskterity, an application for the Microsoft Surface (which is essentially a touch-screen table) that allows writing with a pen, editing through the touch interface, and new tools created through a combination of the two. To decide which features to create and how they should be used, they studied a group of people doing arts and crafts and observed their motions while working with different tools. Manual Deskterity has been developed as, in its basic sense, a scrapbooking application, supporting common tasks such as writing, cutting, drawing, using straight edges, and stacking items, as well as tasks only available on a computer, such as copy-pasting, zooming, and creating new objects out of thin air. They are not yet ready to move this idea into production and plan to continue researching how this technology can apply to other office interfaces, such as spreadsheets, word processors, etc.

        My View

             Let me put this simply. This is awesome! I'm not a scrapbooker myself, but I know people who are that would love this. I have actually seen something with the touch-screen table interface at Disneyland a few months back, but it didn't use the pen, and thus didn't involve nearly the number of tools that this does. And from what I have seen in the video posted above, all the movements seem very natural and easy to pick up.
             If I were to pick any faults with this development, I would say that there might be too much going on, which could cause confusion between features. For example, say you wanted to make little dot marks on a picture using the finger-painting technique, but ended up selecting and deselecting the photo in question over and over again. Also, experienced scrapbookers may not view this as a viable alternative to physical items, especially if they already have an existing collection they don't care to transfer.
             Despite these minor faults, I can foresee a vast potential for advancement of this technology. For example, one of the features they showed on the table at Disneyland was that you could place your smartphone on the table, shake it, and all your photos would appear on the table. Paired with Manual Deskterity, you could instantly start editing your photos, and once you're done, upload to Facebook. I could even see this going so far as being used to develop a website, which is scary for me as a web developer, because then I might be out of a job.
             All in all, this is an outstanding application and I cannot wait to see it go out onto the market.

        Paper Reading #2 - Hands-On Math

        • Paper Title: Hands-On Math
        • Authors: Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, and Hsu-Sheng Ko, all published researchers for Brown University
        • Presented at 23rd annual ACM symposium on User interface software and technology, New York, NY, 2010
        http://dl.acm.org/citation.cfm?doid=1866029.1866035



        Summary

             This article discusses the pros and cons of computing mathematical problems with basic paper and pen versus using the assistance of a Computer Algebra System (CAS) to avoid making mistakes during evaluation, the main con of a CAS being that most interfaces are limited or difficult to learn. Zeleznik et al. feel they may have found a solution with Hands-On Math, a system that creates an environment of virtual pages to be written on like paper while also being able to perform step-by-step algebraic operations. The first part of the article explains the functionality of the device. Its features include the ability to create, move, and delete pages, fold pages over to temporarily hide content, perform algebraic operations such as factoring, and much more, all using pen and touch interfaces. At the time of this paper, they had an alpha version of a working prototype that they tested with fellow students at Brown University, collecting general feedback. Overall, their work was praised as a potentially very viable tool for computing mathematical problems. The main problems that were brought up involved difficulty learning a particular feature or something not feeling "natural". Zeleznik et al. will continue their work to optimize the features and potentially allow more complicated math to be computed as well.
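             For a sense of what the underlying symbolic steps look like, the operations the gestures trigger map onto a few calls in any computer algebra library. This snippet uses SymPy purely as an illustration; it is not the engine Hands-On Math actually uses.

```python
import sympy as sp

x = sp.symbols("x")
expr = x**2 + 5*x + 6

print(sp.factor(expr))              # (x + 2)*(x + 3)  -- the "factor" step
print(sp.expand((x + 2)*(x + 3)))   # x**2 + 5*x + 6   -- and its inverse
print(sp.solve(sp.Eq(expr, 0), x))  # [-3, -2]         -- solve for the roots
```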

        My View


             In this age of smartphones and Angry Birds, where touch and stylus interfaces are all the rage, it is always good to see more useful applications come out of the technology. I believe Zeleznik et al. could have a working product here, albeit with a few concerns from the testers. The question is, would people actually use this? It seems to me that the ones who would get the most use out of this device are students taking the particular subject it assists with, since most professionals who see this math on a daily basis are pretty comfortable with it already and can simply use a much cheaper graphing calculator. However, as a student myself, I can honestly say that I don't see many professors allowing this kind of technology to be used in their classes, as the best way to learn is through mistakes, and inhibiting the ability to make mistakes could, in turn, cause more mistakes down the road in cases where this technology is not available.
             Now, do not get me wrong. I really do see potential in this device, particularly if its computations are advanced to the calculus or differential equations level, since I know there are several calculations at those levels that just cannot be done by hand. Many engineering students here at Texas A&M are required to use such a program, called Maple, to perform these high-level calculations, but it is not widely praised, based on what I have heard from my peers as well as my own experience. Having Hands-On Math as a viable alternative sounds extremely beneficial. In addition, I could see this product being a valuable tool for researchers to help them develop proofs for their ideas.
             All in all, I see Hands-On Math as potentially a powerful innovation, assuming its computational power goes beyond that of a standard TI-89 graphing calculator.

        Thursday, September 1, 2011

        Introduction Blog Assignment # -1

        Name: Zac Casull
        6th Year Senior
        casull88@gmail.com



        Howdy! My name is Zachary Casull and I'm a proud member of the Fightin' Texas Aggie class of 2010! I'm taking this class because I hope one day to be a game designer or software engineer, and I feel learning this subject will help me to understand how to create a successful user interface. I bring to this class the programming experience I have achieved through my other computer science classes at Texas A&M as well as my job as a web developer. I feel that the next big breakthrough in computer science will be a continuance of the technology that went into the XBox Kinect that will allow any computer program or simulation to be run through human gestures. If I could travel back in time, I would really like to meet a caveman, because I think it would be interesting to see where we have evolved from. My favorite shoes are tennis shoes because they take you everywhere you need to go and still look good. In a computer science field, I think it would be highly beneficial to be fluent in Japanese since much of our technology is shared with Japan.