IEEE Signal Processing Society Santa Clara Valley Chapter


Click here to see the full list of upcoming events.


Wednesday, Dec 09, 2015

Ph.D. Elevator Pitch to Professionals - Autonomous Driving, Computational Cameras, Scalable Speech Coding from Stanford, UC Santa Cruz and Santa Clara University

This meeting is hosted and sponsored by the IEEE SPS Chapter


Location:

   AMD Commons C-6/7/8, 991 Stewart Dr., Sunnyvale, CA (map or Google Maps)

 

Schedule:

   5:30pm - Check-in

   5:55pm - Meeting kick-off

   6:00pm - Guest speaker

   6:15pm - PhD talks

   7:00pm - Poster session, food, and networking

   8:00pm - Adjourn

 

Abstract:

Looking for new, local talent? Want to keep yourself and your company up to date on the latest hot topics and technical contributions in Signal Processing? You can do both by attending the "Ph.D. Elevator Pitch to Professionals" event.


The Santa Clara Valley Chapter of the IEEE Signal Processing Society is organizing an event to connect Ph.D. candidates close to graduation, as well as newly graduated Ph.D.s, with local companies looking for talent and new technologies. A panel of students will present their Ph.D. contributions and results in the form of an elevator pitch, followed by Q&A and a poster session with food and networking to continue the conversations.


Guest Speaker:

Alex Acero, President, IEEE Signal Processing Society

Talk Slides (5 MB)


Panel:

David Held, Computer Science Department, Stanford University
Robotics has the potential to create self-driving cars that can save millions of lives or personal robots that can enable people with disabilities to live independently. Unfortunately, most robots today are confined to operating in controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that occur when robots are placed in the real world. I propose that robots can learn to be more robust by modeling the relationship between appearance and temporal information. By developing methods that can understand how the appearance of the world changes over time, we can train robots to be robust to the different ways in which the appearance of objects can vary. I demonstrate this idea on a number of robotics applications, including 3D tracking and segmentation for autonomous driving as well as 2D tracking with neural networks. By building models that can understand the different causes of appearance changes, we can make our methods more robust to a variety of challenges that commonly occur in the real world, thus enabling robots to come out of the factory and into our lives.

Talk Slides (140 MB)
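
For readers who want a concrete picture of what "2D tracking with neural networks" can look like, below is a minimal, hypothetical Python/PyTorch sketch of a regression-style tracker that compares the target crop from the previous frame with a search region in the current frame and regresses a bounding box. It illustrates the general idea only and is not the speaker's actual architecture; all layer sizes and names are made up for the example.

# A minimal sketch (not the speaker's actual model): a regression-style 2D tracker
# that takes the target crop from the previous frame and a search crop from the
# current frame and predicts the new bounding box. All sizes/names are illustrative.
import torch
import torch.nn as nn

class TinyRegressionTracker(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional feature extractor applied to both crops.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        # Fully connected head regresses (x1, y1, x2, y2) within the search crop.
        self.head = nn.Sequential(
            nn.Linear(2 * 32 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, 4),
        )

    def forward(self, prev_crop, curr_search):
        f_prev = self.features(prev_crop).flatten(1)
        f_curr = self.features(curr_search).flatten(1)
        return self.head(torch.cat([f_prev, f_curr], dim=1))

# Example: one dummy frame pair (batch of 1, 3x64x64 crops).
tracker = TinyRegressionTracker()
box = tracker(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print(box.shape)  # torch.Size([1, 4])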

Amin Kheradmand, Department of Electrical Engineering, University of California at Santa Cruz
Most real pictures taken with existing cameras exhibit some level of degradation. Restoration algorithms aim to remove undesired distortions such as blur and/or noise from the degraded input image. We have developed a general regularized framework for image restoration. This framework is based on a graph representation of the underlying image using kernel similarity and Laplacian matrices. These matrices contain the similarity information among different pixels in the image; they are therefore able to encode the structure of the image defined on the vertices of the graph, and can be used as effective tools for representing and analyzing images. The proposed image restoration framework optimizes a cost function constructed from our definition of the Laplacian matrix to regularize the inherently ill-posed deblurring and denoising problems. We have shown the effectiveness of the proposed approach through several synthetic and real examples. We have also introduced an effective method for sharpening noisy and moderately blurred images based on the concept of a data-adaptive difference of smoothing operators.

Talk Slides (8 MB)
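
As a rough illustration of the graph-based regularization described above, the following Python sketch builds a graph Laplacian L = D - W from a Gaussian intensity kernel on a 4-connected pixel graph and denoises an image by minimizing ||x - y||^2 + lambda * x'Lx, i.e. solving (I + lambda*L)x = y. The kernel, neighborhood, and solver choices are illustrative assumptions, not the author's exact construction.

# A minimal sketch (assumptions labeled): graph-Laplacian-regularized denoising in
# the spirit of the abstract above. The kernel, neighborhood, and solver choices
# here are illustrative, not the author's exact construction.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def graph_laplacian(img, h=0.1):
    """Build L = D - W on a 4-connected pixel graph with a Gaussian intensity kernel."""
    H, W = img.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                      # right and down neighbors
        a = idx[: H - di, : W - dj].ravel()
        b = idx[di:, dj:].ravel()
        w = np.exp(-((img.ravel()[a] - img.ravel()[b]) ** 2) / h**2)
        rows += [a, b]; cols += [b, a]; vals += [w, w]   # symmetric weights
    Wmat = sparse.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(H * W, H * W),
    ).tocsr()
    D = sparse.diags(np.asarray(Wmat.sum(axis=1)).ravel())
    return D - Wmat

def denoise(noisy, lam=2.0, h=0.1):
    """Solve (I + lam * L) x = y, i.e. minimize ||x - y||^2 + lam * x^T L x."""
    L = graph_laplacian(noisy, h)
    A = sparse.identity(noisy.size) + lam * L
    x, _ = cg(A, noisy.ravel(), atol=1e-6)
    return x.reshape(noisy.shape)

# Example: denoise a small synthetic image.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
restored = denoise(clean + 0.1 * rng.standard_normal(clean.shape))
print(restored.shape)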

Koji Seto, Department of Electrical Engineering, Santa Clara University
Most recent speech codecs employ code-excited linear prediction (CELP). The CELP technique relies on long-term prediction across frame boundaries, which causes error propagation in the case of packet loss and requires side information to be transmitted to mitigate the problem. The internet low bit-rate codec (iLBC) employs frame-independent coding and is therefore inherently robust to packet loss. However, the original iLBC lacks some of the key features of speech codecs for IP networks: rate flexibility, scalability, and wideband support. In this work, we added these missing functionalities to the iLBC to develop novel scalable speech codecs for IP networks. Rate flexibility was added by employing frequency-domain coding and scalable algebraic vector quantization. Bit-rate scalability was obtained by encoding the weighted iLBC coding error as an enhancement layer. A bandwidth extension technique was employed to provide wideband support, and the wavelet transform was used to further enhance the performance of the proposed codec. The performance evaluation results show that the proposed codec provides high robustness to packet loss and achieves equivalent or higher speech quality than state-of-the-art codecs under clean-channel conditions.

Talk Slides (0.5 MB)
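
To make the bit-rate scalability idea concrete, here is a small, hypothetical Python sketch of layered coding: a coarse base layer plus an enhancement layer that encodes the base layer's coding error, so a decoder can reconstruct from the base layer alone or refine the result when the enhancement bits are available. The uniform scalar quantizers below stand in for the codec's actual frequency-domain coding and algebraic vector quantization.

# A minimal sketch of the layered (scalable) coding idea: a coarse base layer plus
# an enhancement layer that encodes the base-layer coding error. Uniform scalar
# quantizers stand in for the codec's actual frequency-domain / algebraic VQ stages.
import numpy as np

def quantize(x, step):
    """Uniform scalar quantizer: returns integer indices and the dequantized signal."""
    idx = np.round(x / step).astype(int)
    return idx, idx * step

def encode(signal, base_step=0.25, enh_step=0.05):
    base_idx, base_rec = quantize(signal, base_step)            # base layer
    enh_idx, _ = quantize(signal - base_rec, enh_step)          # residual (enhancement) layer
    return base_idx, enh_idx

def decode(base_idx, enh_idx=None, base_step=0.25, enh_step=0.05):
    rec = base_idx * base_step
    if enh_idx is not None:                                     # enhancement layer is optional
        rec = rec + enh_idx * enh_step
    return rec

# Example: a short synthetic "frame" of samples.
rng = np.random.default_rng(0)
frame = rng.standard_normal(160) * 0.5
b, e = encode(frame)
err_base = np.mean((decode(b) - frame) ** 2)
err_full = np.mean((decode(b, e) - frame) ** 2)
print(f"base-only MSE: {err_base:.5f}, base+enhancement MSE: {err_full:.5f}")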










Subscribe to future announcements: link


 
