For my Interactive Media Capstone project, I explored Interactive Media Applications in Foreign Language Acquisition.
As an Undergraduate Spanish minor, I’ve always wondered if new technology could change the ways that we learn languages.
Research suggests that the advent of new technology changes the way we learn. Yet for high school and college students, available foreign language course materials and resources have remained largely the same for several years. This project explores how various forms of Interactive Media can be applied to foreign language education, and offers an example of a supplementary language-learning tool that uses Augmented Reality.
You can read my full literature review at finalmanuscript.
Understanding the User
Prior to developing an interface, I needed to figure out what my application would do and who my audience would be; only then could I understand how to maximize the benefit users would get from my app. My literature review focuses heavily on Foreign Language Acquisition in the Classroom, so I decided to focus my efforts on young teachers and college-aged students. Next, I took stock of that audience's potential considerations and developed a User Persona to profile an example of a user who would benefit from my research.
Following the development of the User Persona, I conducted interviews with Jose Bravo de Rueda, a Foreign Language Professor, and Kayla Fisher, a Foreign Language Student, to learn their respective ideas and expectations for a foreign language learning app.
I brainstormed potential tools and interfaces that would benefit a user similar to Kay. The main points that I gathered were:
- The application should deliver benefits to the user quickly, without a lengthy delayed-gratification model
- A more useful application supplements the omnipresent tools and applications users already rely on
- It would need to be mobile, as it would be useful in ‘on-the-fly’ circumstances
- The application should be able to benefit the user in everyday situations (vocabulary being particularly useful)
Kay’s personal interests and everyday habits gave me a more focused perspective on a user’s daily life. The interviews also yielded more specific requirements that could be useful for my potential users:
- The application would have a limited sign-up or sign-in interface
- The limited sign-in interface would limit the amount of the user’s data that the application collects
- Augmented or Virtual objects need to be used in some way
- Simple/ Self Explanatory UI
- User-Guided/ User-Led Interaction
From Concept to Design
Based on the User Persona and information from my literature review, I believed I now had enough information to begin sketching and storyboarding the application experience. In order to get a better picture of what I was beginning to create, I had to identify what tasks the user would want to complete. After taking inventory, I could thoroughly plan how to provide a seamless task completion experience.
I would also have to identify potential pitfalls and do some light user testing before beginning the higher-fidelity designs and application development. Bearing in mind that the application would feature vocabulary, I settled on an AR vocabulary application.
With Google Cardboard in mind, I created a concise application map to show how the user will get into the interface.
Next, I created a low-fidelity wireframe of the landing page. Here, the user will be able to view and switch native and target languages and toggle into the interface.
Finally, I ‘storyboarded’ the application by using a sample image to outline the application’s function. Note that the application recognizes the object and provides the native word ‘banana’ and its Spanish translation.
After researching development tools for image and object recognition, I decided to use Cordova and Wikitude to develop the application. For exhibition purposes, my application would serve as a prototype for a natively developed iOS and Android application. The finished application would use machine learning and the Google Translate or Microsoft Translator API to display translated words over an object.
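To illustrate the intended translation flow, here is a minimal JavaScript sketch. The inline dictionary and all function names are hypothetical stand-ins: the finished app would replace the lookup with a call to the Google Translate or Microsoft Translator API.

```javascript
// Hypothetical sketch: map a recognized object label to an overlay entry.
// The inline dictionary stands in for a real translation API call.
const sampleDictionary = {
  banana: { es: "plátano" },
  apple: { es: "manzana" },
};

function buildOverlay(recognizedLabel, targetLanguage) {
  const entry = sampleDictionary[recognizedLabel];
  const translation = entry ? entry[targetLanguage] : null;
  return {
    native: recognizedLabel,   // word in the user's native language
    translated: translation,   // word in the target language, or null
    found: translation !== null,
  };
}

// e.g. buildOverlay("banana", "es") →
//   { native: "banana", translated: "plátano", found: true }
```

In the real application, the recognized label would come from the image-recognition step rather than being passed in by hand.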
After gaining an understanding of the constraints that Cordova and Wikitude provided, I began to design the application.
During the development process, I realized that maintaining the contrast between the overlaying text and its image target could be difficult depending on the setting.
To combat this, the overlay includes a rectangular black background with the word appearing over it in white text, using tall, thin characters to improve readability.
Furthermore, I decided the best image overlay would display the word in the target language while also providing the user with audio of the word being pronounced.
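The presentation decisions above can be sketched as a simple settings object. The property names here are illustrative only, not Wikitude's actual API, and the audio URL is a hypothetical placeholder.

```javascript
// Hypothetical sketch of the overlay presentation: a black rectangular
// backdrop with white, condensed text for contrast in any setting, plus a
// pronunciation audio clip for the target-language word.
function overlayStyle(word, audioUrl) {
  return {
    text: word,                   // word in the target language
    backgroundColor: "#000000",   // solid black rectangle behind the word
    textColor: "#FFFFFF",         // white text for maximum contrast
    fontStretch: "condensed",     // tall, thin characters aid readability
    pronunciationAudio: audioUrl, // played so the user hears the word
  };
}
```

Keeping these choices in one place makes it easy to adjust the contrast scheme later without touching the recognition code.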
After ensuring that the application was viable, I began to style the application and accompanying media. I wanted the interface and the accompanying informative website to have an easily digestible white background with matte variants of the reds and blues present in the flags of many Spanish-speaking countries.
The decision to tone down the colors was made following a critique from prospective users that vibrant colors may take the user’s attention away from the content.
In the spirit of the project’s academic roots, I refrained from using fonts that could potentially alter the tone of the content, opting for the ever-so-reliable and mobile-friendly Helvetica alongside Avenir. Avenir’s minimally varying font weights are well suited to reading long passages.