
Conversational Artificial Intelligence - A Resolve for Deaf Inclusivity in India

Updated: Nov 13, 2022


If you are interested in applying to GGI's Impact Fellowship program, you can access our application link here.

 

Executive Summary:


The National Programme for Prevention and Control of Deafness (NPPCD) states that India is home to 63 million people in the Deaf and Hard of Hearing community (~6% of India's population).


Further, statistics reveal that four in every 1,000 children suffer from severe to profound hearing loss, and 100,000 babies are born with a hearing deficiency every year. The estimated prevalence of adult-onset deafness in India is 7.6%, and that of childhood-onset deafness is 2%, underscoring the scale of the problem in the country.


These figures echo the findings of the National Sample Survey (58th round), which, after extensive surveying of disability in Indian households, found that hearing disability was the second most common cause of disability in India and among the top causes of sensory deficit in the country.


Hearing impairment is a serious but grossly neglected condition in India. The country also suffers a huge economic impact from lost productivity, higher unemployment, and lower wages for the hearing impaired. The real issue in India is the woeful inadequacy of facilities of any kind for the deaf. This paper examines the impact of technology, focusing on the communication mechanisms presently prevalent in the Deaf community and the emergence of conversational Artificial Intelligence as a driving force in bridging the accessibility gap for the deaf in India.


1. Introduction:


1.1 The Definition & Appropriation of the term Deaf in India:


The WHO defines “deafness” as the complete or partial loss of hearing ability in one or both ears. The cases considered include hearing loss greater than 90 dB in the better ear (profound impairment) or total loss of hearing in both ears.


However, in India, after several amendments, Section 2 (i and iv) of the Persons with Disabilities Act, 1995 (PWD) recognized “hearing impairment” as a disability (a minimum degree of disability of ~40% as certified by a medical authority) and a “hearing disabled person” as one who has a hearing loss of 60 dB or more in the better ear or total loss of hearing in both ears.


The decision was well received, as it maximized inclusivity (across the conversational range of frequencies) for those with severe hearing impairment under the hearing handicapped category of the act.

1.2 Deafness: A Complex Condition Comprising Bundled Disabilities


In scientific terms, deafness is a complex condition entangled with multiple bundled disabilities. Children with hearing loss face extreme challenges in developing speech and language abilities. With these very foundations being brittle, deaf children suffer disadvantages in school, higher education, and future professional opportunities. The absence of standardized systems for imparting education, the lack of official recognition of Indian Sign Language, and a medium of communication and learning reduced to lip-movement cues in institutions have resulted in complete dependence on the hearing community for bare necessities.


The term "Audism" (bias on the basis of hearing ability, leading to communication barriers) describes a prejudice prevalent in everyday domestic and workplace routines.

2. Technology Overview:


2.1 Existing Self-Help Communication Mechanisms: Chatbots


Conventionally constrained by speech impairment and reliant on action gestures (Indian Sign Language) to interact within and across communities, the Deaf community found the advent of mobile phones a boon, thanks to the text-chat feature, which gave an easy and robust way to communicate thoughts without being hostage to systemic challenges.


2.1.1. Chatbots:


The evolution of chat functionality led to the creation of a pool of automated text-response robots called chatbots. Chatbots instilled a sense of hope for self-help and the autonomy much needed by the Deaf community. Their prevalence grew further with the induction of virtual chat agents in contact-center environments, providing customer support across industries and sectors. This new channel of communication set the Deaf community on par with existing mechanisms of communication.


2.1.2. Technical Definition:


A rule-based, bounded system with well-defined categories. Chatbots are not truly interactive; they provide one-time responses, i.e., one-shot response systems based on flowchart-like binary mechanisms.


2.1.3. Working Scheme & Challenges:


Chatbots have to be pre-fed with anticipated questions, and anything asked outside that scope fails to be identified, as the bot was not trained on that line of queries. Thus, although chatbots generated exhilarating hype as self-help desks, they still pose challenges for practicing real autonomy, since they do not evolve dynamically.
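The one-shot, pre-fed behaviour described above can be sketched in a few lines of Python. The questions and replies below are purely illustrative, not drawn from any real deployment:

```python
# Minimal rule-based chatbot: answers only pre-fed questions and
# fails on anything outside its fixed scope (illustrative intents).
FAQ = {
    "opening hours": "We are open 9 am to 5 pm, Monday to Friday.",
    "refund": "Refunds are processed within 7 working days.",
}

def chatbot_reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer  # one-shot response, no follow-up
    # out-of-scope queries simply fail, as described above
    return "Sorry, I didn't understand that."

print(chatbot_reply("What are your opening hours?"))
print(chatbot_reply("Can I change my delivery address?"))
```

The second query fails precisely because it was never pre-fed, which is the autonomy gap the section describes.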

2.2 Artificial Intelligence (AI): What is Conversational AI?


Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. AI systems typically work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to perform predictions about future states.


2.2.1. Conversational AI:


Conversational AI is technology that ultimately enables machines to truly interact with humans via language.

It is a subset of AI that leverages the concepts below, making them available to build useful applications that process text proactively and get things done.


A) Machine Learning - a field of study that gives computers the ability to learn, without being explicitly programmed, via data-trained models.


B) Neural Networks - are computing systems with interconnected nodes that work much like neurons in the human brain. Using algorithms, they can recognize hidden patterns and correlations in raw data, cluster and classify it, and – over time – continuously learn and improve.


C) Components of Conversational AI:


  • Natural language understanding (NLU) - an AI-driven capability that can extract meaning from written or spoken text.


  • Ability to deploy NLU into action - the toolbox that brings AI-powered conversations to work in tandem across screens, devices and other entities such as cars and more.
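As a rough illustration of the NLU step, the sketch below scores a user utterance against a few hypothetical intents by keyword overlap. Real NLU engines use trained statistical models, not word counts; the intent names and vocabularies here are assumptions:

```python
# Toy intent detector: picks the intent whose example vocabulary
# shares the most words with the utterance (a stand-in for a
# trained NLU model; intents are illustrative).
INTENTS = {
    "book_appointment": {"book", "appointment", "schedule", "visit"},
    "get_prescription": {"prescription", "medicine", "refill"},
}

def detect_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    scores = {name: len(words & vocab) for name, vocab in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_intent("I want to book a visit"))  # book_appointment
```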



2.3. Comparison: Chatbots vs Conversational AI


Conversational AI

  • Definition: A tool to build powerful, advanced, interactive systems.

  • Channels: Assists users on any channel (speech-to-text / text-to-speech).

  • Work scheme: Also referred to as contextual chatbots or virtual agents; these use machine learning, natural language processing, or both to understand user intent and form responses. They can continuously learn from conversations with customers, so they are able to deliver more helpful responses over time.

  • Use cases: Help businesses collect and analyze data to identify cracks in current processes, gather customer sentiment data, and gain a competitive edge with functionalities such as (a) making every conversation searchable and (b) tracking calls with specific keywords to analyze frequently occurring queries.

  • Example - healthcare support:

      • Diagnosis: asks questions and self-trains.

      • Medical scheduling: information on the next visit, documentation and prescriptions.

      • Therapy: taking notes and summarizing.


Chatbots

  • Definition: One-shot response systems only.

  • Channels: Aid a single channel - text responses.

  • Work scheme: Similar to automated phone menus, where the customer has to make a series of choices to reach the answer they are looking for. The technology is ideal for answering FAQs and addressing basic customer issues.

  • Use cases: Automate website support; support customers inside a mobile app; handle internal helpdesk support; collect customer feedback; confirm orders and track shipping; handle refund and exchange requests efficiently.


3. Conversational AI – Implications in Deaf community Inclusivity


Conversational AI focuses on dialogue systems that can handle human variance. A conversational AI helping with an individual's intent can therefore hold a multi-turn dialogue, ask follow-up questions, recognize the user's intent, and compensate when the user gives an unexpected response or goes off-topic. In very simple terms, the focus of conversational AI is the ability to handle human variance, especially in dialogue.
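The multi-turn, follow-up behaviour described above can be pictured as a simple slot-filling loop. The intent, slots and prompts below are assumptions chosen only to illustrate the mechanism:

```python
# Toy multi-turn dialogue: the agent asks follow-up questions until
# every slot of the detected intent is filled (illustrative slots).
REQUIRED_SLOTS = {"book_appointment": ["date", "doctor"]}
PROMPTS = {
    "date": "Which date works for you?",
    "doctor": "Which doctor would you like to see?",
}

def next_turn(intent: str, filled: dict) -> str:
    for slot in REQUIRED_SLOTS[intent]:
        if slot not in filled:
            return PROMPTS[slot]  # ask a follow-up question
    return f"Booked with {filled['doctor']} on {filled['date']}."

state = {}
print(next_turn("book_appointment", state))  # asks for a date
state["date"] = "Monday"
print(next_turn("book_appointment", state))  # asks for a doctor
state["doctor"] = "Dr. Rao"
print(next_turn("book_appointment", state))  # confirms booking
```

Each turn inspects the accumulated state rather than following a fixed script, which is what lets the dialogue absorb answers arriving in any order.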


3.1. Tackling the Bundling Disabilities:


AI-powered communication technology uses an advanced form of automatic speech recognition to convert raw spoken language - ums, stutters and all - into fluent, punctuated text. The removal of disfluencies and the addition of punctuation lead to higher-quality translations into the more than 60 languages such technology can support.

The community of people who are deaf and hard of hearing can use this cleaned-up, punctuated text as an ideal tool to access spoken language in addition to Indian Sign Language.
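A crude version of that clean-up step, stripping filler words before the transcript reaches a Deaf reader, might look like this. The filler list is an assumption; production ASR post-processing uses trained models, not word lists:

```python
# Naive disfluency removal: drop common filler words from a raw
# transcript (filler set is an assumption for illustration only).
FILLERS = {"um", "uh", "erm", "hmm"}

def clean_transcript(raw: str) -> str:
    # keep every word whose lowercase form is not a known filler
    return " ".join(w for w in raw.split() if w.lower() not in FILLERS)

print(clean_transcript("um I uh want to book an appointment"))
# I want to book an appointment
```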


3.2. Tackling the Diverse Vernacular presence of Deaf in India:


Language is the driving force of human evolution; it enhances collaboration, communication and learning. Deep learning for speech recognition gave speech technology human-like accuracy, enabling machine translation systems from English to vernacular languages.


3.3. Live Transcribing with robust Speech to Text/Text to speech capabilities (STT & TTS)


A few of the advanced social bots employ novel strategies in the areas of:


3.3.1 Conversational Speech Recognition

Considers characteristics of multi-speaker conversation, such as style and emotion, and is trained more extensively than mere paired-text recognition.


3.3.2 Topic Tracking

Tracking the current state of a conversation and its associated events.


3.3.3 Voice User Experience

Speech recognition technology that allows people to interact with a computer, smartphone or other device through voice commands.


3.3.4 Tools for traffic management and scalability

This entails strong lexicon building, text mining & effective word search capabilities.



3.4. Dynamically Interactive bots with decisive feasibilities:


3.4.1 Natural Language Understanding

A subset of natural language processing that digs deeper via machine reading comprehension and content analysis of text inputs.


3.4.2 Context Modeling

A formal or semi-formal description of the context (of the business, the data).


3.4.3 Dialog Management

Enables the system to store user intents that came up during a conversation but were not acted upon immediately.
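That deferred-intent behaviour can be pictured as a small queue: intents raised mid-conversation are parked and replayed once the active task finishes. This is a sketch under assumed names, not a real dialog-manager API:

```python
from collections import deque

# Toy dialog manager: parks intents raised mid-conversation and
# replays them in order once the active task completes.
class DialogManager:
    def __init__(self):
        self.pending = deque()

    def defer(self, intent: str) -> None:
        self.pending.append(intent)  # store intent for later

    def next_intent(self):
        # pop the oldest deferred intent, or None when empty
        return self.pending.popleft() if self.pending else None

dm = DialogManager()
dm.defer("ask_prescription")  # raised while booking was in progress
dm.defer("ask_directions")
print(dm.next_intent())  # ask_prescription
print(dm.next_intent())  # ask_directions
```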


3.4.4 Response Generation

To determine the content of the response and how best to express it. The system’s verbal output is generated as a stretch of text and passed to the text-to-speech component to be rendered as speech.


3.4.5 Knowledge Acquisition

The process of extracting, structuring and organizing knowledge from one source.


Thus, this robust system of components, backed by core strategies, is more than sufficient to bring about solution-based interactions for the Deaf community.


4. Pioneers in Conversational AI: Adoption and Solutions


4.1. Kore.ai: (Global)


A) An enterprise-grade, no-code platform that enables business users, non-developers and non-data scientists to leverage conversational AI technology to easily build conversational UIs, virtual assistants and process workflows for a variety of use cases.


B) The platform is used by 200+ Fortune 2000 companies across the globe for implementing a wide range of customer and employee experience critical capabilities.


4.2. Yellow.ai (India)


A) One of the top Indian conversational AI start-ups with the world’s leading conversational CX platform for consumer-centric brands.


B) Across the total customer lifecycle, it offers a wide range of platforms, such as a bot studio, an agent-assist dashboard, customer support automation, conversational commerce automation, and HR & ITSM automation.


Sectors with Maximum Adoption of Conversational AI: Global vs India

Global

E-commerce, medicine, human resources, travel & real estate

India

Communication, OTT (over-the-top) media, gaming, technology & financial services

Note: This table presents the stark contrast between the pivotal sectors that prioritize emerging technologies globally and their Indian counterparts.



Conclusion:


The problem of the child deaf from birth is quite different from that of the adult who has become completely deafened after school age or in adult life. The hard-of-hearing person whose deafness has developed slowly over the years is different again. But for all of them, the handicap is the same: the handicap of the silent world, the difficulty of communicating with the hearing and speaking world.


Thus, it is time to build stronger ecosystems of technology, encourage India's budding incubation culture, and incentivize the adoption of emerging technologies, for these scientific tools pave the way past the systemic intrusions oppressing the specially abled and empower them with the equity needed to bring about equality in society.


The new generation brings new hope for Deaf inclusivity in India in the times to come.



Meet The Thought Leaders


Karan Patel (he/him) is a mentor at GGI and an undergraduate from IIT Madras. He is currently employed with Teachmint, an ed-tech start-up, on their strategy team. Prior to Teachmint, he worked at Dalberg Advisors as an analyst, where he worked with multilaterals and international foundations in the gender, education and energy sectors. He has also interned at MIT Sloan, Qualcomm and IIM Ahmedabad, giving him a plethora of experience in the corporate and academic worlds. He also started his own venture in hyperlocal air-quality monitoring. Karan is an avid sportsperson and a masala chai fanatic.



Meet The Authors (GGI Fellows)


Gautam Aggarwal is based out of Gurgaon, Haryana, and is currently pursuing his bachelor's in business from Shaheed Sukhdev College of Business Studies. He holds a keen interest in startups and consulting. During his journey at CBS, he has interned with companies like Anthill Venture, The 80/20 Consultants and Meteor Ventures, and GGI has been the greatest platform to nurture his interests in the field of consulting.



Arvind Mohan is presently pursuing his PGD in Liberal Arts as a Young India Fellow at Ashoka University. He completed his bachelor's in Electrical Engineering from Delhi Technological University, working as an advisory intern at PwC in their Power & Utility practice. Intrigued by technology, Arvind moved into the IT space as a Senior Software Engineer at Bank of America for the last three years. During this tenure, he was deeply involved in much of the ESG/CSR diligence for the bank, which sparked his quest to work in the social impact sector; he subsequently became an impact fellow candidate at the Global Governance Initiative '22. Arvind is a sports buff, a black belt in Karate and a state gold medalist in kickboxing. He has played cricket at various levels and was inducted into the Delhi Daredevils Academy '18. His passion for sports led to his pursuit of the Indian Air Force, clearing the CDS/AFCAT thrice. His creative pursuits include a small caricature venture and an NFT collection on the OpenSea marketplace. He is also certified in Indian Sign Language and presently volunteers with an NGO for the Deaf to build UI/UX prototypes. For Arvind, GGI is a stepping stone for pivoting sectors and doing something he innately believes in.





 


