So, by the title you can tell how surprising it is that we're near the end of term already. It really does feel like just yesterday that we started discussing interaction design and user experience. I still remember some of the really cool examples that Velian talked about in class, and how we all went from calling them 'sickkk' to analysing their efficiency and aesthetics.
We definitely covered a lot of material that I wasn't expecting just from the name of this course. The goals outlined for it included coming up with creative solutions to design problems, learning to understand and anticipate user needs and, of course, critically evaluating user experience.
Assignment 1 gave us fresh insight into what goes on behind the scenes when trying to design an efficient product. It allowed us to be creative and innovative with Project Jacquard, and it really helped us put the course material to practical use.
Furthermore, assignment 2 taught us to critically evaluate a technical website like MarkUs. It was really interesting how we had already sort of learnt to look at things from a technical perspective. We went from viewing MarkUs as just a simple tool to fully comprehending how much detail it takes to build a fully functioning system like it. From catching certain glitches in the design to being critical of its implementation decisions, it was definitely a learning experience.
One of the most helpful things throughout the semester was definitely the research-based lectures. I find myself tilting toward research in the future, and while some of the information, like Week 2 and the basics, was a little repetitive given the Cognitive Science and Psychology courses where I've already done research, it was nonetheless informative. I particularly liked discussing research methods and getting very clear-cut instructions on how to structure a study. It's usually not this detailed in most courses, and this was an eye-opener. It allowed me to take things one step at a time and evaluate each step before moving on.
The overall project for this course is of course a big part of that learning experience. Writing research essays for other courses is always interesting, but actually formulating our own problem space in detail and then analysing a solution for it step by step was far more challenging and satisfying to complete.
The lectures on Usability and Goodwill were really interesting too: some of the points seem obvious enough, but considering there are still some awful designs out there, they made for an interesting discussion.
As mentioned earlier, there was a lot of overlap with other cognitive psychology courses, and some of the experiments and models we discussed in class did seem a little redundant to me, but I could appreciate seeing them in a new context and light, especially relating them in detail to technology and how some of this research helps us design better products.
Last but not least, I have to appreciate these blog posts. When Velian first mentioned them, I was rolling my eyes, expecting the blogs we had to do in 1st and 2nd year for some of the programming courses, where the focus was solely on discussing class material. That can get tedious and mostly just feels like work. This, however, was a brilliant idea and honestly so much fun.
It was really cool how it was basically effortless, especially with Facebook and YouTube always publicizing so many innovative new projects. Trying to evaluate them with a critical eye was very informative and educational.
All in all, I'd 100% recommend this course especially for anyone interested in HCI or even Cognitive Science. Shoutout to all the TAs and Velian for doing a great job!
Sunday, 2 April 2017
Monday, 20 February 2017
Tech for Helping with Chronic Diseases
This week I'd like to focus on an amazing innovation that helps people with chronic diseases like Parkinson's, Huntington's and cerebral palsy, as well as mobility issues like tremors. It is a special spoon produced by the company Liftware, aimed at improving the quality of life of those suffering from postural tremors in their upper limbs.
Without further ado, let's delve into its design and specifics.
IxD
The interaction design for this device is really simple and effective. It is basically a spoon (or a fork) that allows you to eat without a lot of help. People with hand or arm tremors find basic day-to-day activities quite hard, and it usually affects not just their quality of life but also their confidence. Think about going to a restaurant and eating soup, or dishes like ramen that need precise movements even from practised hands. For people with severe tremors this means extremely slow eating, eating with assistance and sometimes preferring to avoid certain kinds of food altogether! Being able to do something as simple as eating without assistance is incredibly important in such situations.
The Liftware website is also very specific in its instructions about who this device can help, what kinds of tremors it can help with and to what degree. It provides a simple test for those who would like to know whether they're suitable users and how strongly its use is recommended for them. There are also several videos that show how the spoon is used, letting potential buyers see it in the hands of people in a similar situation to their own.
UI
The Liftware spoon is quite affordable for what it offers. According to its Amazon listing, a starter kit costs ~200 CAD, which should be worth it for what the device provides. It is also aesthetically pleasing, with a simple, clean white design, and it comes with its charger in a small box that can be conveniently carried around.
The Liftware spoon works by counteracting tremors: motion sensors in the handle detect the shaking, and two motors drive the utensil in the opposite direction to stabilize it. In the version for people with limited arm mobility, the sensors instead detect and work against unintentional tilting or tipping. A small onboard computer distinguishes unintentional movements from intentional hand movements.
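To make that idea concrete, here is a minimal sketch of active tremor cancellation, assuming a simple band-pass approach: postural tremors mostly sit in a higher frequency band than intentional movement, so isolating that band and driving the motors against it stabilizes the utensil. The sampling rate, filter order and tremor band here are illustrative assumptions, not Liftware's actual parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 100.0                 # assumed sensor sampling rate (Hz)
TREMOR_BAND = (4.0, 12.0)  # typical postural tremor band (Hz), assumed

def cancellation_command(accel):
    """Isolate the tremor component of the motion signal with a
    band-pass filter and return the opposing motor command."""
    b, a = butter(2, [f / (FS / 2) for f in TREMOR_BAND], btype="band")
    tremor = lfilter(b, a, accel)  # fast shaking only; slow intent passes
    return -tremor                 # drive the motors the other way

# Example: a slow intentional sweep (0.5 Hz) plus a 7 Hz tremor
t = np.arange(0, 2, 1 / FS)
motion = 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 7 * t)
command = cancellation_command(motion)
# motion + command keeps the slow sweep but largely removes the shake
```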
Since the device uses magnets to attach the utensil attachments, the company recommends consulting a healthcare provider if the user has a cardiac pacemaker or other electrical implant devices in their body, but other than that it is safe to use like a regular utensil.
UX
To provide a better experience, the Liftware spoon comes in two different types, aimed separately at people with tremors and at those with hand and arm mobility limitations. Both are tailored to their specific uses and rely on sensor technology. The device comes with 3 different utensil attachments: a spoon, a soup spoon and a fork.
It has a decent battery life of up to 1 hour of continuous use, which could last for ~3 meals on a single charge.
It is even simple to clean: detach the utensil and wash it like a regular one, while the stabilizing handle can be wiped with a sponge or disinfectant.
Other details
What makes this technology great is that it's made for some very specific needs that are often taken for granted. There is also published research showing roughly 70% less tremor with the device, which is available for users to look into so they can make informed choices.
Bibliography:
https://www.liftware.com/
http://onlinelibrary.wiley.com/doi/10.1002/mds.25796/abstract
https://www.amazon.com/Liftware-Steady-Starter-Kit/dp/B00JDSIOJE
https://www.youtube.com/watch?v=fS01kn6YJ94
Monday, 6 February 2017
Tech for the Visually Impaired: Keeping with the Theme
Trying to find a blog topic last time opened up a whole world of options to me. However, there were some innovations that just seemed too (for lack of a better word) cool. So this week I'd like to build on tech advances made for people with disabilities by exploring the world of the visually impaired.
Aipoly Vision is a free application made by Aipoly, designed to help people with visual impairments identify objects and colours in their environment. It is available for download on the Apple App Store. Here is a link to their official video: https://www.youtube.com/watch?v=XMdct-5bERQ and a supporting video: https://www.youtube.com/watch?v=h5g82YNmwmU
IxD
It has a really simple, easy-to-use and convenient design. Consider the times we have easily decided what to wear to colour-coordinate with the perfect shoes, walked into an office knowing we're entering the right room, or done something as simple as knowing what's on our plate. While all these tasks seem ridiculously simple to those with full vision, you can see the hardships involved with impaired vision. This app allows users to overcome some of those very basic hardships: asking for directions, reading road signs and even shopping alone are made easier. You only need to point the camera at whatever needs to be labelled and take a picture for the app to describe it in speech or text. This helps people with visual impairments make sense of the space and things around them.
UI
The app has a simple and aesthetically pleasing look. Once a picture has been captured, it is run through a convolutional neural network that breaks it down into points of interest that can be matched to particular objects. It not only provides an accurate name for the objects it is shown but also a description in terms of colour and action, for example, "Man riding bicycle."
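For a sense of what that classification step looks like, here is a minimal sketch using a generic pretrained CNN from torchvision as a stand-in; Aipoly's actual network, labels and pipeline are proprietary, so everything below is an illustrative assumption.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for a compact model
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.eval()  # a small model like this can run offline, on the phone

def top_label(image_path: str) -> int:
    """Return the index of the most likely class for one photo;
    a real app would map this index to a word and speak it aloud."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return int(logits.argmax(dim=1))
```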
UX
There are other apps on the market targeting this particular group that rely mostly on crowdsourcing, which works with extremely good accuracy for the most part but has the downside of slow response times, not to mention the need for an internet connection. Aipoly, on the other hand, works without an internet connection, making it accessible at all times and even handy for privacy.
The user experience in general seems to be great. The app supports multiple languages like English, Spanish and Portuguese, making it widely accessible, and it also allows users to build on its knowledge of objects.
Another exciting feature of this app is the involvement of users. It's not quite the same as after-the-fact feedback, since the app will simply update its versions with more options and accuracy, but you can correct a label if you know it's wrong or not completely accurate. This is of course both extremely useful and a little ironic, since the target audience is the visually impaired, who would be the ones needing the assistance rather than the ones providing clarifications. Nonetheless, this particular function should improve the app's accuracy as more and more items enter its database.
Aipoly Vision quite accurately recognizes plants, animals, colours and even currency. While USD is the only currency in its database right now, the company is working on adding more. It has some other convenient functions too: if it detects darkness, it automatically switches on your phone's torch, and once the light is back, switches it off.
The app is also extremely careful when labeling a person as either man or woman, and usually takes more time to do so. It can describe emotions as well, at least the most visually obvious ones like anger and happiness, though once again it takes considerable time before it calls someone angry.
Aipoly Vision does not work in real time, however; its best speeds are around 3-5 seconds for common day-to-day objects. According to the founders, that is the one utility they consider a true weakness. Its accuracy with objects seems to increase the more people use it, and misidentification is another issue that improves as more people use and correct the app. A smaller issue is that you may need to point it at smaller objects multiple times, since it takes the background and surroundings into account; it might go "table.. plate.. food" before identifying the food.
Apart from its technological features, there are a few other things I'd like to mention. Any sort of artificial intelligence raises ethical questions about who can modify and use it, and to what extent. One has to wonder who controls the changes made by users, as it is plausible to imagine malicious users trying to degrade its accuracy for competitive reasons. What's stopping a user from mislabelling things? Considering this particular app is aimed at the visually impaired, a wrong label could, in the worst case, be a matter of life and death: a signal colour misidentified as green instead of red, or a construction warning sign misread, can be extremely dangerous.
I was unable to find its policies on user interference, but I'd like to hope that, as with any other AI technology, Aipoly Vision takes its ethical and moral dilemmas into account.
As with any tech innovation designed mainly for people with disabilities, I'd also like to hope that feedback and considerations are taken from visually impaired people themselves.
In conclusion, this app seems to work great, and I look forward to using it on Android as soon as it's released.
Bibliography
https://techcrunch.com/2015/08/17/aipoly-puts-machine-vision-in-the-hands-of-the-visually-impaired/
https://coolblindtech.com/aipoly-vision-artificial-intelligence-for-your-ioss-camera/
http://tech.aipoly.com/
https://itunes.apple.com/us/app/aipoly-vision-sight-for-blind/id1069166437?mt=8
Sunday, 22 January 2017
Signing Gloves: Inclusive or Offensive?
Inclusivity is the term of our generation and something people are gradually striving to bring about in our day-to-day lives. Everything from communication and accessibility to safe spaces and hiring regulations makes it possible for minority groups and the less fortunate to find equal footing. Being personally passionate about social justice, I realized that technological innovations made to better the lives of those with disabilities play a very important role here. One such innovation that caught my eye was sign-language-translating gloves.
Several projects based on this concept have been floating around the internet, but the one I'd like to discuss is the prototype developed by engineering students from Polytechnique Montreal in 2012.
The following is the link to its demonstration at TEDxUdeM:
https://www.youtube.com/watch?v=DpcI5h1EuqI
IxD
Its interaction design is simple: hand gestures that represent signs get translated to speech and text when the gloves are plugged into a computer or smartphone. This allows people who are deaf or hard of hearing to communicate, to a certain extent, with hearing people.
UI
This product is easy to use and understand. It uses fibre optics and light sensors along the phalanges, with detectors in the host device, to transliterate sign language. While it is certainly quite effective, it isn't (at least at this early stage) very efficient.
It has several limitations, which I'll discuss below.
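Before getting to those, here is a minimal sketch of what the transliteration step might look like, assuming calibrated flexion readings from five finger sensors matched against known handshapes by nearest neighbour. The sensor scale, template values and signs are all illustrative assumptions, not details of the Montreal prototype.

```python
import math

# Hypothetical calibrated flexion readings (0.0 = straight, 1.0 = fully
# bent) for the five fingers: thumb, index, middle, ring, pinky.
SIGN_TEMPLATES = {
    "A": (0.2, 1.0, 1.0, 1.0, 1.0),  # fist, thumb alongside
    "B": (0.9, 0.0, 0.0, 0.0, 0.0),  # flat hand, thumb tucked
    "L": (0.0, 0.0, 1.0, 1.0, 1.0),  # thumb and index extended
}

def classify(reading):
    """Return the template handshape closest to the current sensor
    reading (simple nearest-neighbour matching in flexion space)."""
    return min(SIGN_TEMPLATES,
               key=lambda s: math.dist(SIGN_TEMPLATES[s], reading))

print(classify((0.1, 0.1, 0.9, 1.0, 0.9)))  # -> "L"
```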
UX
Since these gloves are only at the prototype stage and not yet available on the market, it is hard to judge the overall user experience. However, the idea alone seems quite impressive: the user would only need to wear the gloves in order to communicate in their usual way.
There are several different aspects that need to be addressed for this particular innovation. Apart from the technology itself, one must consider the societal impact of this device. While in its most basic form it seems extremely useful, those familiar with the deaf community (and I am in no way an expert, I just read a lot) might be a little hesitant to accept it. The deaf community is just that: a community. It has its own language, behaviour, arts and value system. There are several people within the community who do not feel the need to hear, and some refuse to undergo surgery for cochlear implants. Some would argue that such a device only makes things easier for hearing people; they are the ones this product benefits. It discourages hearing people from making an effort to communicate with the deaf community by learning sign language, talking slower, etc. Being sensitive to the deaf community should be (and the developers really seem to want to be, from what I could tell from the video) of the utmost priority while enhancing this device.
Coming to the technological aspects, the most important concern would be the accuracy of its translations. As you can see in the video, it is quite difficult to capture signs with 100% accuracy, and it would take a lot of work to correctly interpret every word.
One issue you can imagine is identical gestures that differ only in their placement and therefore mean different things. For example, the hand gesture for 'mother' and 'father' in American Sign Language happens to be the same, but the mother sign is placed at the chin and the father sign at the forehead. While the video doesn't demonstrate it, I hope the glove resolves this problem, since it deals with hand gestures and their position in space.
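One plausible fix, sketched below, is to extend the handshape features with a rough hand position so that otherwise-identical gestures can be told apart. The position labels and sign pairs are illustrative assumptions; a real glove might derive position from an inertial sensor or a camera.

```python
# The same handshape resolves to different words depending on where it
# is held; position here is just a hypothetical discrete label.
PLACED_SIGNS = {
    ("five_hand", "chin"): "mother",
    ("five_hand", "forehead"): "father",
}

def translate(handshape: str, position: str) -> str:
    return PLACED_SIGNS.get((handshape, position), "<unknown sign>")

print(translate("five_hand", "forehead"))  # -> "father"
```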
There is also the issue of portability: having wires hanging off your hands doesn't seem very appealing, especially when hand gestures and movement are involved. The team seems to be working on this specific issue by making the gloves wireless, which would definitely improve the user experience.
Time is another factor that has hopefully been taken into account in the improvement stages. The glove seems to need repeated gesturing to produce a translation, which in practice seems inefficient for both the design and the user experience. Imagine having to talk one word at a time; for the deaf community it would mean repeating multiple hand gestures, which could get really old really fast.
All in all, I thought this was quite a good idea, at least for those in the deaf community in unfamiliar situations where they either don't have someone to interpret for them or need immediate assistance. As they mentioned in the video, it is critical to have people from the deaf community themselves be the critics and help improve this device. It would also be really cool to have a device that did the opposite and independently produced hand gestures, letting hearing people communicate with the deaf community.