
IEEE Signal Processing Society Santa Clara Valley Chapter




Click here to see the full list of upcoming events.


Wednesday, November 12, 2014

Intelligent Personal Assistants and Signal Processing Challenges

Speaker:

   Dr. Asli Celikyilmaz

   Senior Scientist, Natural Language Understanding, Microsoft

 

Location:

  AMD Commons Theater, 991 Stewart Dr., Sunnyvale, CA (map or Google Maps)

 

Schedule:  

   6:30pm: Networking/Light Dinner

   7:00pm: Announcements

   7:05pm: Presentation

   8:15pm: Adjourn

 

Cost:

  Free.

 

Abstract:

Following the rapid proliferation of mobile devices, especially smartphones, multimodal virtual personal assistant (VPA) applications such as Apple Siri, Microsoft Cortana, and Google Now have started to emerge. With advances in speech recognition, language understanding, and machine learning, coupled with client-side capabilities such as larger multi-touch displays and server-side capabilities based on cloud computing, these applications have begun to move beyond conventional command-and-control speech applications. One of the core technologies in a VPA system is understanding what users are saying, known as spoken language understanding (SLU). In the last decade, a variety of practical goal-oriented spoken dialog systems have been built for limited domains, employing "targeted" SLU capabilities. Given an utterance, SLU in dialog systems extracts predefined semantic information from the output of an automatic speech recognizer (ASR). This semantic information usually contains the intent of the user and associated arguments (slots), matching the back-end capabilities. The dialog manager (DM) then determines the next machine action given the SLU output. In this talk I highlight some of the technical challenges and research efforts for multimodal virtual personal assistant applications, focusing especially on spoken language understanding and dialog aspects, and pointing out issues and opportunities in this area.
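
As a rough illustration of the targeted SLU pipeline described in the abstract, the short Python sketch below (hypothetical example names and a toy rule-based back end, not the speaker's or Microsoft's implementation) shows an ASR hypothesis being mapped to a predefined intent with slots, and a minimal dialog manager choosing the next machine action from that SLU output:

from dataclasses import dataclass, field

# Hypothetical toy sketch of a targeted SLU + dialog manager pipeline;
# real systems use trained intent classifiers and slot taggers.

@dataclass
class SLUResult:
    intent: str                                  # user's intent, e.g. "book_restaurant"
    slots: dict = field(default_factory=dict)    # associated arguments (slot name -> value)

def understand(asr_hypothesis: str) -> SLUResult:
    """Toy SLU: map an ASR hypothesis to a predefined intent and slots."""
    text = asr_hypothesis.lower()
    if "reserve" in text or "table" in text:
        slots = {}
        if "tonight" in text:
            slots["time"] = "tonight"
        return SLUResult(intent="book_restaurant", slots=slots)
    return SLUResult(intent="unknown")

def next_action(slu: SLUResult) -> str:
    """Toy dialog manager: pick the next machine action from the SLU
    output, matching a hypothetical reservation back end."""
    if slu.intent == "book_restaurant":
        missing = [s for s in ("restaurant", "time") if s not in slu.slots]
        if missing:
            return "ask_user(%s)" % missing[0]          # elicit the missing slot
        return "call_backend(reserve, %s)" % slu.slots  # all slots filled
    return "ask_user(rephrase)"

if __name__ == "__main__":
    slu = understand("reserve a table for two tonight")
    print(slu)               # SLUResult(intent='book_restaurant', slots={'time': 'tonight'})
    print(next_action(slu))  # ask_user(restaurant)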

 

Biography:

Asli Celikyilmaz is a Senior NLP Scientist in the Language & Intent group at Microsoft Silicon Valley. Asli's research interests are natural language processing, spoken language understanding, and machine learning (specifically structured prediction and unsupervised learning of parameters and features). Prior to Microsoft, Asli was a postdoctoral researcher in the Computer Science Department at the University of California, Berkeley, from 2008 to 2010, working on machine learning methods for automatic text summarization and question answering. Asli has co-authored more than 50 papers on AI and natural language processing, including question answering, summarization, and spoken language understanding.

https://research.microsoft.com/en-us/people/aslicel/

 

[Photo] From left: Radhakrishna Giduthuri, Asli Celikyilmaz

 

A joint meeting co-sponsored by the Santa Clara Valley IEEE CES Chapter and the IEEE WIE Affinity Group.


Subscribe to future announcements: link


 

 

 
