Prototyping Wearables for Supporting Cognitive Mapping
by the Blind: Lessons from Co-Creation Workshops
Wallace Ugulino and Hugo Fuks
Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
Av. Marquês de São Vicente, 255, Gávea, Rio de Janeiro
[email protected], [email protected]
This paper describes the co-creation workshops we carried out
with three groups composed of blind users, mobility instructors,
designers, and computer engineering students. The wearables
prototyped by these groups combine verbalized warnings with
haptic and audio feedback aiming at supporting blind persons in
the task of identifying landmarks. The identification of landmarks
is an essential skill required for the cognitive mapping and spatial
representation. Since 2013 we have been worked on the
investigation of wearables for supporting landmark identification
and, thus, the cognitive mapping by the blind. The prototypes we
describe in this paper were used in 82 trials in 2 empirical studies
and are now being replicated for supporting mobility-training
Blind Mobility, Spatial Representation, Landmark Identification,
Wearable Computing.
The identification of landmarks plays a crucial role in the
locomotion process of blind persons [1]: it supports orientation,
provides contextual information, helps in planning routes, and
helps to avoid obstacles and hazardous situations.
According to the inefficiency and difference theories of spatial
representation by blind persons [2], [3], this information is useful
for the cognitive mapping.
The problem in providing landmark information to the blind lies
not only in detecting landmarks, but also in the way this
information is presented. Neglect of human factors is considered
to be one of the main reasons why blind persons do not adopt new
assistive technologies [4]. For instance, individuals who acquire
blindness usually take some time to accept the white cane during
their period of mourning, but they finally accept it because the
benefits outweigh the hassle associated with the cane. For any
new technology the golden rule is to minimize the hassle and
maximize the benefits.
Permission to make digital or hard copies of all or part of this work for personal
or classroom use is granted without fee provided that copies are not made or
distributed for profit or commercial advantage and that copies bear this notice
and the full citation on the first page. Copyrights for components of this work
owned by others than ACM must be honored. Abstracting with credit is
permitted. To copy otherwise, or republish, to post on servers or to redistribute
to lists, requires prior specific permission and/or a fee. Request permissions
from [email protected]
WearSys'15, May 19, 2015, Florence, Italy.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-3500-3/15/05…$15.00.
The masking phenomenon is a problem caused by technology: it
consists of cognitive overload and/or harmful interference of the
technology with the wearer's ability to sense the environment [5]–
[8]. Avoiding masking is one of the main challenges faced by
designers of wearable devices for blind pedestrians when
designing Electronic Travel Aids (ETA) and Electronic
Orientation Aids (EOA).
In order to develop wearables for supporting landmark
identification (and, thus, cognitive mapping), this research used a
method - documented in [9] - that combines an ethnographic
study with co-creation workshops. During the co-creation
workshops, blind subjects and mobility instructors worked
together with designers and computer scientists to produce three
different low fidelity (lo-fi) prototypes. We then used these lo-fi
prototypes to build a high-fidelity (hi-fi) prototype. The hi-fi
prototype was used later in 82 trials in two empirical studies
(beyond the scope of this paper).
This paper is organized as follows: Section 2 presents the
literature review, with a discussion on the theories of spatial
representation and a list of related works. Section 3 describes the
observational study and the main lessons we learned with these
observations. Section 4 describes the co-creation workshops and
the wearable prototypes. Finally, the conclusion and future work
are discussed in Section 5.
The theories about spatial representation by the blind stem from
Molyneux's Problem [10]. From the initial set of questions,
three answers arose: deficiency, inefficiency, and difference
theories [2], [3].
The first dominant theory – the deficiency theory – says the blind
could not acquire a spatial representation because it depends on
the association between sight and other senses [11]. Locke and
Molyneux, 17th-century philosophers, agreed with the deficiency
theory. George Berkeley [11], [12] is another early supporter of
the deficiency theory. A particular experiment – conducted by
William Cheselden [13] – on a boy born blind, who had surgery
for cataracts at 14 years old, initially corroborated the deficiency
theory, but many problems in the experiment brought even more
discussion and the subject remained inconclusive. Later on, based
on observations concerning the sight of newborn animals and
babies, another theory emerged – the inefficiency theory – in
which blind persons are considered able to acquire spatial
representation despite the absence of visual information. The
inefficiency theory, however, says this spatial representation is
necessarily less efficient in blind than in sighted persons. Some
known supporters of this theory are: Johannes Müller, Hermann
von Helmholtz, and Adam Smith [14].
Nowadays, difference theory is the most accepted one. It agrees
with inefficiency theory concerning the blind's ability to acquire
spatial representation, but diverges because it says the difference
between blind and sighted person’s spatial representation is
qualitatively different – and not necessarily less efficient. For
instance, a blind person may prefer a longer path as long as it is
easier to remember (avoiding cognitive overload). In this case, the
“distance” does not seem like the best optimization function for
the blind, but maybe the number of clues or landmarks in the path.
Susanna Millar [15], [16] and Simon Ungar [17]–[19] are
amongst the researchers who support this theory. Our research
draws from the difference theory: it takes as a premise that the
blind are able to acquire spatial representation, and it aims at
supporting this acquisition by means of wearable devices.
The research on blind mobility may benefit from the framework
defined by Michael Brambring [1], in which the problem of
locomotion of blind pedestrians is divided into smaller problems.
Brambring divided the locomotion problem into two categories:
problems of perception and problems of orientation. Perception
problems are related both to the capacity of perceiving and
avoiding obstacles (first minor problem), and the capacity of
identifying landmarks in a path (second minor problem).
Orientation problems are related both to the difficulty of
orientation in near-space (spatial orientation, or the space within
the reach of haptic exploration – approx. 3 ft) and far-space
(geographical distances orientation) [20], [21].
From the Perception category, Obstacle Detection & Avoidance
ETAs have been largely investigated, while Landmark
Identification ETAs have received little attention so far [22]–[27].
The most common assistive technology for obstacle detection is
the white cane, and the most mature ETA for obstacle detection is
the UltraCane – an electronic cane that uses vibration motors to
indicate obstacles at ground, hip, and head level [22]. No specific
work on Landmark Identification was found during our literature
review, but some EOAs designed for navigation could help the
blind identify objects that could serve as landmarks, like the
works in [28]–[30].
Devices for navigation using GPS are the most common EOAs
investigated in the Orientation category. Very often, these systems
try to solve both the Spatial Orientation and Geographical
Orientation problems, as the former bears directly on the latter
(this implication is also depicted in Brambring's model).
Among these systems, GéoTact [5] stands out for the way it
communicates directions to the wearer: instead of saying "turn
right" (for example), it informs directions through a clock
metaphor – for instance, 12 o'clock means straight ahead and
2 o'clock means a slight turn to the right. The authors do not
say whether this approach is better or worse than the traditional
turn-by-turn instructions used in car navigation systems.
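The clock metaphor is easy to make concrete. The sketch below is our illustration (the paper does not describe GéoTact's actual implementation): a relative bearing in degrees, with 0° meaning straight ahead, is mapped to the nearest clock hour.

```python
def bearing_to_clock(bearing_deg):
    """Map a relative bearing (degrees; 0 = straight ahead,
    positive = clockwise) to the nearest clock hour, following
    the clock metaphor: 12 o'clock = straight, 3 o'clock = hard
    right, 9 o'clock = hard left. Each hour spans 30 degrees."""
    hour = round((bearing_deg % 360) / 30) % 12
    return 12 if hour == 0 else hour

def clock_instruction(bearing_deg):
    """Render the bearing as a spoken-style instruction."""
    return "%d o'clock" % bearing_to_clock(bearing_deg)
```

With this mapping, a 60° bearing is announced as "2 o'clock", matching the slight-right example above.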
Some efforts in the Orientation category used an approach of
mapping images to auditory or tactile displays [32]–[34] (and
also [31] for obstacle detection). This mapping approach has been
consistently related to the masking problem, as the auditory
pattern takes too much effort from the wearer to be understood.
The mapping of images into auditory patterns results in overly
complex patterns, requiring too many hours of training and also
impairing the wearer’s pace of gait because they must interpret the
sound before they can take action. This problem is well described
by Borenstein and Ulrich after their experimental tests:
The problem with this method lay in the fact that a
considerable conscious effort was required to
comprehend the audio cues. Because of the resulting
slow response time, our test subjects could not travel
faster than roughly 0.3 m/sec (1 foot/sec). And even this
marginal level of performance required hundreds of
hours of training time. [35] (p.1284)
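The scale of the problem Borenstein and Ulrich describe can be illustrated with a toy sonification sketch. This is our illustration, not the actual mapping used in [29] or [35]: each bright pixel of a frame becomes one tone, with pitch encoding the row and loudness the brightness, so even a tiny frame yields a dense stream of audio events the wearer must decode in real time.

```python
def image_to_audio_events(frame, f_low=500.0, f_high=5000.0):
    """Toy image-to-sound mapping: scan the frame column by column
    (left to right); every non-zero pixel becomes one tone whose
    frequency encodes the row (top row = highest pitch) and whose
    loudness encodes the pixel brightness.
    Returns a list of (column, frequency_hz, loudness) events."""
    rows = len(frame)
    events = []
    for col in range(len(frame[0])):
        for row in range(rows):
            brightness = frame[row][col]
            if brightness > 0:
                freq = f_low + (f_high - f_low) * (rows - 1 - row) / (rows - 1)
                events.append((col, freq, brightness))
    return events
```

A single diagonal line in an 8x8 frame already produces 8 distinct tones per scan; a real camera frame produces orders of magnitude more, which is consistent with the training hours and the 0.3 m/s walking speed reported above.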
The mapping of images to tactile displays is also hard to interpret
and may be even worse for communicating to the wearer, because
the skin's sensitivity varies and the most sensitive areas are
usually small (like the fingertips). In addition, loss of sensitivity
in the limbs (peripheral neuropathy) is very common in blind
persons when blindness comes together with (or as a result of)
diabetes [36]. The harmful interference of ETAs and EOAs in the
blind person’s ability to pick up environmental cues is called
"masking". The masking problem is one of the most frequent side
effects of such technology because sight communicates a great
deal of information very efficiently through the optic nerve, and
no other sense seems capable of doing so as efficiently. The ideal
solution would be to make the blind capable of seeing, but that
does not seem achievable in short-term research. To the best of
our knowledge, no device is capable of communicating through
the optic nerve, so researchers must rely on different "paths" for
conveying environmental information to the brain. These paths
do not have the necessary "bandwidth", which requires the
designer to be careful when selecting what and how to
communicate environmental cues to the wearer. Therefore,
"masking" stands as a challenge for designers of wearable
assistive technology for the visually impaired.
Our research goal is to support the blind in the identification of
Landmarks in order to help spatial representation acquisition.
Because we have to communicate sensed information to the
wearer, our research runs the same masking risks as research on
obstacle detection & avoidance or navigation. In
order to avoid or minimize the occurrence of the masking
phenomenon, we built three wearables prototyped by potential
users and technical staff, as discussed in the subsequent sections.
We started this research by observing blind subjects in order to
understand how they make spatial references, what elements in
space are used as reference, and what elements they can't
perceive. The observational study we made is classified as
real-life, non-structured, non-participant, and individual
observation [37]. This phase of the research lasted 8 months.
Throughout the whole research we had the support of the
Benjamin Constant Institute (IBC), a Brazilian institution that has
devoted 160 years to the education and support of the blind. The
institution has its own ethical rules, and the research procedure
was evaluated and approved in advance.
We were authorized to work with adult students of the
rehabilitation department; most participants were late blind, but
some born-blind students also participated. All participants were
asked to sign a consent form in which they declared if and how
they wanted to volunteer for the research, as Brazilian law forbids
paying subjects for participation in research. The lessons we
learned during this observational phase
are listed in Table 1.
Table 1. Lessons from the observational phase of the research

Lessons | Observed individuals
Postural inadequacy problems: round shoulders, forward head, asymmetric feet position | Born blind, low vision, and late blind individuals
Strategies for cognitive mapping: graphs, linked lists, and doubly linked lists; strategies for collecting references | Born blind
Difficulties in geographical orientation: step counting vs. external referencing; associated cognitive overload | Born blind
Difficulties in spatial orientation: difficulties in walking straight forward, in body-centric referencing, and in external referencing | Born blind
Difficulties in getting an overview of a scene; strategies used by blind persons | Born blind and late blind

We used pseudonyms for referencing all participants in this paper.
Concerning postural problems, one mobility instructor said these
are the first issues he works on with his students. The most
common postural issues are round shoulders, forward head, and
asymmetric feet position. According to Thales and to [38],
rounded shoulders are usually caused by a protective posture
against unexpected encounters.
The posture of low vision people is usually bad because they're
trying to correct their field of vision. They usually have 'forward
head' posture or an inclination to one side, and also develop
round shoulders in order to avoid (...)
[Thales, 30 y.o., physiotherapist and mobility instructor].
Text 1. Postural problems of the blind
Blind persons use fixed elements as references, but they also use
temporary cues, such as the smell of a grocery store. They use
walls to help them walk in a straight line. References are used
especially to make sure they are on the right path or to change
directions, but they also count references sometimes. In that case,
reference counting is preferred over step counting (Text 2).
I don't like counting anything, either references or steps.
Counting things makes me sleepy! (...) I think I have all these
paths entangled in my 'head' [pointing at her head]. I know that
at some point I'll reach references A, B, and C. I know that if I
change my path I'll see other references and I know how to reach
my destinations through them. I have it all in my mind. [Manu]
Text 2. Orientation strategies based on references
One of our blind participants declared she doesn't always prefer
the shortest path. According to her, it's not always the best choice,
as it can be dangerous or have few references to help her orientate
herself. The best choice, for her, is a path with more references
(Text 3).
How do I choose between two paths? If the longest path is
'better', then I choose it. Someone may say to me: "come on,
choose the shortest..." NOOO! (emphasis) What if the shortest is
too open or has no references? The best path, for me, is the one
with more references. [Manu]
Text 3. Preference for paths with more references
The most common references used in the mobility course are:
texture of the floor, drafts, walls, fences, and plants, among others.
For the design phase, we used co-creation workshop sessions with
participants ranging from engineering to design students, from
sighted people to blind participants, and also mobility instructors.
There were 13 participants, as listed in Table 3.
The workshop started with a brainstorming session on the main
problems a blind person faces when walking without a sighted
guide. During this session, instead of just listing ideas on the
board, we decided to read the ideas aloud from time to time, in
order to make it possible for the blind participants to take part.
Table 3. Participants in the co-creation workshop: 13 participants
(pseudonyms include Mary Louise, Gunther B., Rachel Green, and
Johnny Duque), aged 17 to 55, sighted, born blind, and late blind,
with backgrounds ranging from engineering, design, and
informatics students to a mobility instructor and a master in
literature.
Many problems were listed, and the group was asked to define
guiding criteria for the design. The group settled on the following
main guiding criteria:
 Using technology embedded in wearables already worn
by the blind, like watches, glasses and white canes;
 Keeping their hands free;
 Never blocking their ears with earplugs or similar devices.
After defining the guiding criteria, the participants were divided
into 3 groups, each with at least one blind user. The mobility
instructor was asked to join a group with only one blind person,
in order to balance the number of stakeholders in each group. We
also chose to put at least one designer and one engineering
student in each group.
For the prototyping session, the method used was Blank Model
Prototyping (BMP) [39], which requires the participation of
potential users (we had 4 blind participants), and design and
technology professionals. Blank Model Prototyping is a rapid
role-playing technique that uses readily available art and craft
materials to construct rough physical representations of a
technological concept, according to a predetermined scenario. The
method was used with the intent to collect potential user
impressions and detailed ideas about the wearable to be built for
the experimental part of this work. Although most of our
participants had already worked with BMP in the past, we found it
important to give them instructions on BMP to keep the group on
the same page. Fig 2 shows participants during the role-playing
landmark identification, we chose the radio-based solution: it is
simpler and faster to develop than machine learning models for
computer vision (as it is required in the other prototypes).
A “Digital Beacon” identified each landmark, like a door, chair,
entrance halls, hallways, etc. Our beacon is a tangible device
made with microcontrollers (we used Arduino Mini Pro), 4 AA
batteries, electronic components for voltage regulation, and a
433Mhz radio transmitter. Figure 2 shows a digital beacon, and
two versions of the Smart Glasses made to communicate with
these beacons.
Each group produced their prototypes in 2 hours with their
workgroups and then we moved on to the test phase. The roleplaying part of the method consists in using the prototypes in a
role-playing session (Figure 1). Each group took 10 minutes to
present each feature of their prototype.
Figure 2. Digital Beacons and Smart Glasses
The wearable consisted in glasses with embedded computing. For
the first version, we used a 3D model for printing the glasses, but
we changed to manufactured protection glasses (embedding our
technology in it) because of ergonomic issues. We also built a
glove and a belt with speakers and vibration motors in order to
test different modes of feedback with our subjects, as shown in
Figure 3.
Figure 1. Testing prototypes ideas in a role-play session
Many features were discussed and presented during the roleplaying sessions. For instance, for the feature “communicate new
landmarks to the wearer”, each group chose a different solution:
group 1 chose to use beeps and midi sounds (1 sound for each
kind of landmark); group 2 decided to use sequences of vibrations
with vibration motors mounted on a watch, and group 3 specified
a text-to-speech solution.
The technology chosen for the task of identifying new landmarks
was the same in 2 prototypes: image processing on images
obtained from mounted cameras. Group 2 chose a solution based
on radios, which also incorporated cameras. As a result, the
wearable would listen to the air trying to identify digital beacons.
When it wasn’t possible to identify the digital beacons, the wearer
must point the watch to a direction and wait for the analysis of the
In order to develop and test each feature, we decided to use an
iterative development process. The first step was to evaluate the
viability of each solution. Because masking is a frequent
phenomenon in ETAs and EOAs, we decided to build a simpler
version of the prototypes for investigating the masking effect. For
Figure 3. Smart Glasses, Glove and Belt
The Glasses embedded an Arduino Pro Mini microcontroller
(16 MHz, 5 V), a 433 MHz receiver module, 3.7 V Li-Po
batteries, voltage regulators (5 V and 3.3 V), a USB charging
module, a WTV020-SD sound module (to provide verbalized
warnings to the wearer every time a new beacon is found), and a
Bluetooth LE module (Adafruit nRF8001) for communication
with the iOS app used to configure the wearables and the test
modes. The glove and the belt were made with the same
components as a Digital Beacon, but we added speakers and
vibration motors to make it possible to test different feedback
modes.
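The runtime behavior of the Glasses (receive a beacon ID over the 433 MHz link, look the landmark up, announce it once) can be sketched as a small simulation. This is an illustrative Python sketch rather than the actual Arduino firmware; the landmark table, the debounce interval, and the feedback-mode names are our assumptions.

```python
class SmartGlassesSim:
    """Illustrative simulation of the Glasses' logic: each received
    433 MHz frame carries a beacon ID; a newly detected beacon
    triggers the configured feedback (verbalized warning on the
    sound module, or vibration/beep on the glove and belt), while
    repeated frames from the same beacon are debounced so the same
    landmark is not announced over and over."""

    def __init__(self, landmarks, feedback_mode="speech", repeat_after_s=30):
        self.landmarks = landmarks          # beacon ID -> landmark label
        self.feedback_mode = feedback_mode  # "speech", "vibration" or "beep"
        self.repeat_after_s = repeat_after_s
        self._last_seen = {}                # beacon ID -> last announce time

    def on_frame(self, beacon_id, now_s):
        """Handle one received radio frame. Returns the feedback event
        to emit as (mode, landmark), or None when the beacon is unknown
        or was announced too recently."""
        if beacon_id not in self.landmarks:
            return None
        last = self._last_seen.get(beacon_id)
        if last is not None and now_s - last < self.repeat_after_s:
            return None  # debounce: landmark already announced
        self._last_seen[beacon_id] = now_s
        return (self.feedback_mode, self.landmarks[beacon_id])
```

Configuring the same logic with feedback_mode="vibration" reproduces, in simulation, the alternative feedback modes tested with the glove and the belt.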
This paper presented our approach for prototyping wearables for
supporting spatial representation acquisition by blind people. The
work started with an observational study in which we learned the
main problems related to the mobility of blind pedestrians. We
were able to correlate the masking phenomenon (described in
literature) with the testimonials of our blind participants. We then
produced a high-fidelity prototype based on the lessons we got
from the previous stages. This prototype is now being used in
mobility-training sessions and we keep learning from them. In this
way, this work contributes an approach for prototyping wearables
for blind users.
Future work includes a new version of the Smart Glasses
featuring a camera mounted between the lenses, to be used for
the identification of landmarks. Cameras need to be pointed at the
object to be sensed, while radio-based solutions (like our
beacons) have a perimeter within which the radio waves are
captured by antennas. These differences may influence the
wearer's performance in finding landmarks, as well as his posture
and social behavior. Investigating these differences is a second
goal in the next steps of this research.
The National Council of Technological and Scientific
Development (CNPq) supports this research under the project
number #458.766/2013-5. Wallace Ugulino receives a grant from
CNPq (#385535/2013-9).
[1] M. Brambring, “Mobility and orientation processes of the
blind," in Electronic spatial sensing for the blind, D. H. Warren
and E. R. Strelow, Eds. Dordrecht, Netherlands: Nijhoff, 1985,
pp. 493–508.
[2] S. K. Andrews, "Spatial cognition through tactual maps,"
in 1st Int. Symposium on Maps and Graphics for the Visually
Handicapped, 1983, pp. 30–40.
[3] J. F. Fletcher, “Spatial Representation in Blind Children:
Development Compared to Sighted Children.,” J. Vis. Impair.
Blind., vol. 74, no. 10, pp. 381–385, 1980.
[4] E. M. Ball, "Electronic Travel Aids: An Assessment," in
Assistive technology for visually impaired and blind people,
London: Springer, 2008, pp. 289–321.
[5] R. Farcy, R. Leroux, A. Jucha, R. Damaschini, C. Grégoire,
and A. Zogaghi, “Electronic Travel Aids and Electronic
Orientation Aids for blind people: technical, rehabilitation and
everyday life points of view," in Conference & Workshop on
Assistive Technologies for People with Vision & Hearing
Impairments: Technology for Inclusion, 2006, p. 12.
[6] J. A. Brabyn, "New developments in mobility and orientation
aids for the blind.,” IEEE Trans. Biomed. Eng., vol. 29, no. 4,
pp. 285–9, Apr. 1982.
[7] L. Kay, “A sonar aid to enhance spatial perception of the
blind: engineering design and evaluation,” Radio Electron.
Eng., vol. 44, no. 11, p. 605, 1974.
[8] L. Kay, “An Ultrasonic Sensing Probe as a Mobility Aid for
the Blind,” Ultrasonics, vol. 2, no. 2, pp. 53–59, 1964.
[9] H. Fuks, H. Moura, D. Cardador, K. Vega, W. Ugulino, and
M. Barbato, “Collaborative Museums: an Approach to Co-
Design,” in Proceedings of the ACM 2012 conference on
Computer Supported Cooperative Work - CSCW ’12, 2012,
pp. 681–684.
[10] J. Locke, An Essay Concerning Humane Understanding, 4th
ed. Oxford: Clarendon Press, 1975.
[11] G. Berkeley, An essay towards a new theory of vision. 1709.
[12] G. Berkeley, A new theory of vision and other select
philosophical writings. 1922.
[13] W. Cheselden, “An Account of some Observations made by a
young Gentleman, who was born blind, or lost his Sight so
early, that he had no Remembrance of ever having seen, and
was couch’d between 13 and 14 Years of Age,” Philos.
Trans., vol. 402, pp. 447–450, 1728.
[14] H. von Helmholtz, Handbook of physiological optics. 1925.
[15] S. Millar, Understanding and representing space: Theory and
evidence from studies with blind and sighted children. Oxford:
Clarendon Press/Oxford University Press, 1994.
[16] S. Millar, "Understanding and representing spatial
information," Br. J. Vis. Impair., vol. 13, no. 1, pp. 8–11, Mar. 1995.
[17] S. Ungar and M. Blades, “Strategies for Organising
Information While Learning a Map by Blind and Sighted
People,” Cartogr. J., vol. 34, no. 2, pp. 93–110, 1997.
[18] S. Ungar, M. Blades, and C. Spencer, “Visually impaired
children’s strategies for memorising a map,” Br. J. Vis.
Impair., vol. 13, no. 1, pp. 27–32, Mar. 1995.
[19] S. Ungar, “Cognitive Mapping without Visual Experience,” in
Cognitive mapping: past, present, and future, 2000, p. 221.
[20] M. A. Hersh and M. A. Johnson, "Mobility: An
Overview," in Assistive technology for visually impaired and
blind people, London: Springer, 2008, pp. 167–208.
[21] M. A. Hersh and M. A. Johnson, "Disability and
Assistive Technology Systems,” in Assistive technology for
visually impaired and blind people, 2008, pp. 1–50.
[22] B. Hoyle and D. Waters, "Mobility AT: The Batcane
(UltraCane)," in Assistive technology for visually impaired and
blind people, London: Springer, 2008, pp. 289–321.
[23] T. Ifukube, T. Sasaki, and C. Peng, “A blind mobility aid
modeled after echolocation of bats.,” IEEE Trans. Biomed.
Eng., vol. 38, no. 5, pp. 461–5, May 1991.
[24] K. Ito, M. Okamoto, J. Akita, T. Ono, I. Gyobu, T. Takagi, T.
Hoshi, and Y. Mishima, “CyARM: an alternative aid device
for blind persons,” in CHI ’05 extended abstracts on Human
factors in computing systems - CHI '05, 2005, p. 1483.
[25] S. K. Bahadir, V. Koncar, and F. Kalaoglu, “Wearable
obstacle detection system fully integrated to textile structures
for visually impaired people,” Sensors Actuators A Phys., vol.
179, pp. 297–311, Jun. 2012.
[26] S. Shoval, J. Borenstein, and Y. Koren, “Mobile robot
obstacle avoidance in a computerized travel aid for the blind,”
in Proceedings of the 1994 IEEE International Conference on
Robotics and Automation, 1994, pp. 2023–2028.
[27] C. Jacquet, Y. Bellik, and Y. Bourda, Electronic Locomotion
Aids for the Blind: Towards More Assistive Systems, vol. 19.
Berlin/Heidelberg: Springer-Verlag, 2006.
[28] F. Dramas, B. Oriola, B. G. Katz, S. J. Thorpe, and C.
Jouffrais, “Designing an assistive device for the blind based on
object localization and augmented auditory reality,” in
Proceedings of the 10th international ACM SIGACCESS
conference on Computers and accessibility - Assets ’08, 2008,
p. 263.
[29] P. B. Meijer, “An experimental system for auditory image
representations.,” IEEE Trans. Biomed. Eng., vol. 39, no. 2,
pp. 112–21, Feb. 1992.
[30] A. Hub, T. Hartter, and T. Ertl, “Interactive tracking of
movable objects for the blind on the basis of environment
models and perception-oriented object recognition methods,”
in Proceedings of the 8th international ACM SIGACCESS
conference on Computers and accessibility - Assets ’06, 2006,
p. 111.
[31] G. Sainarayanan, R. Nagarajan, and S. Yaacob, "Fuzzy image
processing scheme for autonomous navigation of human
blind," Appl. Soft Comput. J., vol. 7, no. 1, pp. 257–264, Jan. 2007.
[32] J. Zelek, R. Audette, J. Balthazaar, and C. Dunk, "A
Stereovision System for the Visually Impaired," University of
Guelph, 1999.
[33] A. Hub, J. Diepstraten, and T. Ertl, “Design and development
of an indoor navigation and object identification system for the
blind,” in Proceedings of the ACM SIGACCESS conference on
Computers and accessibility - ASSETS ’04, 2004, p. 147.
[34] S. Meers and K. Ward, "A Substitute Vision System for
Providing 3D Perception and GPS Navigation via Electro-Tactile
Stimulation," in Int. Conf. on Sensing Technology, Nov. 2005,
pp. 551–556.
[35] J. Borenstein and I. Ulrich, “The GuideCane-a computerized
travel aid for the active guidance of blind pedestrians,” in
Proceedings of the International Conference on Robotics and
Automation, Apr. 1997, vol. 2, pp. 1283–1288.
[36] D. M. Nathan, "Long-term complications of diabetes
mellitus," N. Engl. J. Med., vol. 328, no. 23, pp. 1676–1685, 1993.
[37] M. de A. Marconi and E. M. Lakatos, Fundamentos de
Metodologia Científica. Atlas, 2010.
[38] W. R. Wiener, R. L. Welsh, and B. B. Blasch, Eds.,
Foundations of Orientation and Mobility, 3rd ed., Vol. I:
History and Theory. AFB Press, 2010.
[39] J. Arnowitz, M. Arent, and N. Berger, Effective Prototyping
for Software Makers. Elsevier, 2010.