Breakout 3: Enabling Technologies for Automated Vehicles
Tuesday and Wednesday, July 11 & 12, 1:30 PM – 5:30 PM
- Carl Andersen, USDOT
- Stacy Randecker Bartlett, Ellis & Associates
- Jennifer Carter, HERE
- Rob Dingess, Mercer Strategic
- Dominique Freckmann, TE Connectivity
- Juhani Jaaskelainen, Independent
- Jim Misener, Qualcomm
- Sudararajan Sudharson, Booz Allen
- Valentin Scinteie, Kontron
- Virginia Stouffer, LMI
Vehicle automation has captured the hearts and minds of many, and the resulting promise of safety, mobility, convenience, comfort, and a plethora of other potential benefits is indeed exciting. Just as exciting, and crucial to the envisioned applications, are the enabling technologies that will literally and figuratively be under the hood.
This two-afternoon session will explore what is under that hood. It will begin with a series of moderated 45-minute “deep dives.” Each deep dive will feature two invited short presentations, with most of the time devoted to the interactive, moderated dialogue that follows.
At the end of the second afternoon, and drawing on this integrated experience, we will invite all participants to collectively synthesize the current state of the relevant enabling technologies and the challenges and opportunities that remain. In the end, we hope to achieve:
- Views of technology needs for successful models of deployment of automated vehicles
- A set of research topics based upon the group’s analysis of gaps.
- Significant individual take-aways to include understanding of the diverse enabling technologies and a holistic perspective on how they might combine into future automated vehicle systems.
Tuesday, July 11
1:30 PM – 1:45 PM Introduction: Concept, Program, Panelists and Organizers
Jim Misener, Qualcomm
1:45 PM – 2:30 PM Positioning and Localization
Moderator – Jennifer Carter, HERE
- Russ Shields, Ygomi
- Xinzhou Wu, Qualcomm
2:30 PM – 3:15 PM Cybersecurity
Moderator – Jonathan Petit, On Board Security
- Walter Sullivan, Elektrobit
- Harsh Patil, LG Electronics
3:15 PM – 3:45 PM Break
3:45 PM – 5:30 PM Digital Infrastructure I and II
Moderators – Robert Dingess, Mercer Strategic (Digital Infrastructure I) and Carl Andersen, USDOT (Digital Infrastructure II)
- Satoru Nakajo, University of Tokyo
- Maxime Flament, ERTICO
- Scott Nelson, HERE
- Paul Carlson, Texas A&M University
- Doug Dolinar, LimnTech Scientific
Wednesday, July 12
1:30 PM – 2:15 PM Sensing and Perception
Moderator – Dominique Freckmann, TE Connectivity
- Allan Steinhardt, AEye
- Tony Han, JingChi.ai
2:15 PM – 3:00 PM On-Board Computational Technologies
Moderator – Valentin Scinteie, Kontron
- Jack Weast, Intel
- Wesley Shao, Baidu USA
- Tim Wong, NVIDIA
3:30 PM – 4:00 PM Break
4:00 PM – 4:45 PM Cellular/5G vs 802.11p-based Communications
Moderator – Jim Misener, Qualcomm
- John Kenney, Toyota ITC
- Tim Lienmueller, DENSO
4:45 PM – 5:30 PM Synthesis and Lessons Learned
Moderator – Jim Misener, Qualcomm
Speaker Names: Doug Dolinar / Bill Haller
Speaker Titles: (Dolinar) President; (Haller) VP Engineering
Speaker Organization: LimnTech Scientific Inc.
Presentation Title: Real-time Lane Marking Location - Dynamic Map Updates: A Road Maintenance Perspective (10 Minutes)
Enhanced GPS, laser, and video technologies are increasingly used to automate manufacturing processes. This presentation centers on a new method of automating the installation of road markings in which a spatially accurate "digital marking" location is captured as part of normal lane marking maintenance or installation operations. The captured data can then be made accessible via a cloud-based service to provide “real-time,” reliable lane location data for ADS or dynamic mapping processes.
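To make the abstract concrete, a minimal sketch of the kind of record a striping vehicle might log: as paint is applied, each GPS fix is stored with marking metadata, producing a spatially accurate "digital marking" alongside the physical one. All class and field names here are illustrative assumptions, not LimnTech's actual data model or API.

```python
# Hypothetical sketch: capturing a "digital marking" during normal
# striping operations. Names and fields are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class MarkingPoint:
    lat: float        # degrees, from enhanced (e.g., RTK-corrected) GPS
    lon: float        # degrees
    timestamp: float  # seconds since epoch


@dataclass
class DigitalMarking:
    marking_type: str                       # e.g., "solid-white-edge"
    points: list = field(default_factory=list)

    def log_point(self, lat, lon, timestamp):
        """Record one GPS fix captured while paint is being applied."""
        self.points.append(MarkingPoint(lat, lon, timestamp))

    def to_payload(self):
        """Serialize for upload to a cloud-based map-update service."""
        return {
            "type": self.marking_type,
            "points": [(p.lat, p.lon, p.timestamp) for p in self.points],
        }


marking = DigitalMarking("solid-white-edge")
marking.log_point(40.1234567, -75.1234567, 1_499_790_000.0)
marking.log_point(40.1234601, -75.1234512, 1_499_790_001.0)
payload = marking.to_payload()
print(len(payload["points"]))  # prints 2
```

The key idea the sketch captures is that the location data is a byproduct of the maintenance operation itself, so no separate survey pass is required.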
Key Takeaways: A spatially accurate "virtual" marking, captured as part of the installation or maintenance of lane markings, provides a unique approach to dynamic mapping data systems. The utilization of virtual and physical markings may alleviate a key concern relative to marking visibility due to inclement weather, road maintenance or road construction. This type of digital/physical infrastructure interfacing may enhance the value of various ADAS systems and provide a second-wave technology model for the development of road readiness levels for machine vision.
Speaker Bio: Doug Dolinar is President of LimnTech Scientific. A former aerospace engineer who worked on the Space Shuttle and rotary-wing aircraft, he eventually transitioned to designing and manufacturing high-production pavement marking application and removal equipment. He holds a Bachelor of Mechanical Engineering degree from the University of Detroit.
Speaker Bio: Bill Haller is Vice President of Engineering for LimnTech Scientific. After receiving his BS Engineering Physics and MS Electrical Engineering from Lehigh University, Bill co-founded SciTronics Incorporated – a company dedicated to developing voice prosthetic devices. He subsequently founded Industrial Vision Systems Inc. – a manufacturer of laser based machine vision products.
Speaker Name: Jack Weast
Speaker Title: Chief Systems Architect for Autonomous Driving Solution
Speaker Organization: Intel
Presentation Title: L3 – L5 Development: Balanced Compute for Sequential and Parallel Workloads
The fully autonomous vehicle will need a tremendous amount of both parallel and sequential computing to support three intertwined stages of driving: perception, sensor fusion, and decision-making. Each stage requires different types of compute. The autonomous vehicles being tested today are early prototypes that have not yet been optimized for power and performance. Before system designers can achieve level 4 and 5 driving automation, they must determine how to best place different compute elements to support each type of workload. However, as system designs and artificial intelligence evolve, this placement can be a moving target.
No fixed architecture can keep pace with the breakneck speed of innovation in artificial intelligence and system design. GPUs have gained momentum in autonomous vehicle design due to their performance in image rendering and convolutional neural networks (CNNs), but are quickly becoming commoditized. Algorithms and neural network topologies are rapidly evolving, with major breakthroughs happening roughly every three to six months.
So what’s the right mix? In this panel we’ll discuss the technical advantages of all compute types for near- and long-term development, including GPUs, CPUs, FPGAs, and hardware acceleration.
Speaker Bio: Jack Weast is a Sr. Principal Engineer and the Chief Systems Architect for Autonomous Driving Solutions at Intel. In his nearly 20-year career at Intel, Jack has built a reputation as a change agent in new industries, with significant technical contributions to a wide range of industry-first products and standards that benefit from complex heterogeneous high-performance compute solutions in markets embracing high-performance computing for the first time. With an end-to-end systems perspective, Jack combines a unique blend of embedded product experience with a knack for elegant software and systems design that will accelerate the adoption of autonomous driving. Jack is the co-author of “UPnP: Design By Example,” is an Associate Professor at Portland State University, and is the holder of 19 patents with dozens pending.
About Intel Autonomous Driving Solutions
Automated driving will change lives and societies for the better, resulting in fewer accidents, greater mobility, and more efficient traffic flow. With the Intel® GO™ automated driving solutions portfolio for automotive, Intel brings its deep expertise in compute, connectivity, and the cloud to deliver solutions for automated driving.
Intel® GO™ automated driving solutions give OEMs incredible scalability and a flexible architecture that maximizes hardware and software reuse. This means OEMs can pursue countless design iterations, differentiate brands, and accommodate every market need and increasing levels of autonomy—while potentially lowering the cost of development and accelerating time-to-market. Solutions are built upon a foundation of security and functional safety to help protect drivers and passengers, as well as vehicle systems and data.
Speaker Name: Wesley Shao
Speaker Title: Principal Architect for Intelligent Driving Group
Speaker Organization: Baidu USA
Presentation Title: Apollo – Open Platform for Autonomous Driving
Baidu recently announced the Apollo program, an open platform to build the brain that can drive autonomous vehicles. In this panel, we discuss the components and capabilities of its first release, the requirements of its onboard computing environment, the partners, the sensors, and the roadmap. We will also highlight ways to contribute to and participate in the program.
Wesley Shao is a system architect for Baidu’s Intelligent Driving Group. He leads the hardware team responsible for developing multiple generations of on-board computers and sensors for Baidu’s autonomous driving cars. Most recently, he has been involved with the Apollo program, building the industry’s first open platform for autonomous driving.
About Baidu Intelligent Driving Group
Baidu formalized its autonomous driving development in 2016 and has since engaged in L3, L4, and V2X development. It demonstrated its autonomous driving technologies at the World Internet Conference in WuZhen, ZheJiang, China last year. Recently it initiated the Apollo program to foster an open community and ecosystem for autonomous driving development.
Speaker Name: Dr. Allan Steinhardt
Speaker Title: AEye Chief Engineer
Speaker Organization: AEye, Inc.
Presentation Title: Computer Vision Myopia: Looking to the Future
Many in the computer vision community use video analytics as a framework for developing advanced vision solutions. While this works for implementations like Facebook image recognition, it’s myopic to apply the same methodology for self-driving cars and robots. Video analytics assumes unlimited processing, massive data, high quality sensors, unlimited power and lots of time to analyze data. Robots offer no such luxuries, and their constraints require a different scientific approach.
In this session, AEye Chief Engineer, Dr. Allan Steinhardt explains robotic vision’s role in advancing safe, reliable vision for robots and vehicles. These machines must navigate safely with an objective purpose in mind, and do so with limited processing, lower quality sensors, limited data (due to rapid transit) and quick reaction time (low latency). The dynamic nature of vision for mobile robot sight presents many challenges, and Steinhardt will discuss how we can leverage other research community findings, including those from industrial engineering, missile seekers, energy harvesting and spacecraft GNC, in our quest for advanced robotic vision solutions.
Dr. Steinhardt is among the world’s most widely recognized and respected defense scientists. Prior to AEye, Steinhardt was Chief Scientist at Booz Allen, where he led a team of scientists, engineers, and mathematicians in providing prototyping, portfolio analysis, technology roadmaps, and innovation services to the Office of the Secretary of Defense, including DARPA, DTRA, and DDR&E. Prior to joining Booz Allen, Steinhardt held positions in National Laboratories (MIT Lincoln Laboratory, Radar Group), the Prime Contractor Defense Industry (BAE Systems), and academia (Cornell University, Assistant Professor in Electrical Engineering and Applied Mathematics). Steinhardt has published over 200 articles in academic and defense strategy journals, co-authored a book on Adaptive Radar, and held various leadership positions in the IEEE. He holds a bachelor’s degree in Mathematics and graduate degrees in Electrical and Computer Engineering from the University of Colorado, Boulder. Steinhardt is a member of the National Academies’ Naval Studies Board, the board of the Armed Forces Communications Electronics Association, and is a regional judge for the FIRST robotics competition.
Speaker Name: Tim Wong
Speaker Title: Technical Marketing for Autonomous Vehicles
Speaker Organization: NVIDIA
Presentation Title: AI platform for Self-Driving Cars
NVIDIA has created the DRIVE PX platform, the in-car AI supercomputer for autonomous driving. We envision running many deep neural networks (DNNs) simultaneously to drive the vehicle. One DNN could be a detection and classification network used not only for object detection (pedestrians, cars, trucks, motorcycles, bicycles, signs, lampposts, and even animals), but also for lane marking detection. Another would be a segmentation network, useful for determining the free space around the vehicle that is available for driving, the driving bounds (typically curbs and medians), and blocking objects such as vehicles and pedestrians. A third DNN could be an end-to-end network that mimics learned driving behavior; this can be used as a basic path planner to drive the vehicle under typical circumstances. These networks work together to enable the vehicle’s artificial intelligence to drive the vehicle while keeping the driver, the occupants, and nearby pedestrians safe.
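The division of labor among the three networks described above can be sketched as follows. This is an illustrative toy, not NVIDIA's DRIVE PX API: the networks are replaced by stubs returning hand-written results, and the fusion rule (accept the end-to-end proposal only when detection and free-space checks say it is safe) is an assumption for illustration.

```python
# Illustrative sketch of fusing three DNN outputs into one driving decision.
# All functions are stand-ins; real networks would consume camera frames.

def detection_net(frame):
    # Stub for the detection/classification network: objects + lane markings.
    return {"objects": [{"class": "pedestrian", "distance_m": 12.0}],
            "lanes": ["left-dashed", "right-solid"]}

def segmentation_net(frame):
    # Stub for the segmentation network: drivable free space ahead, in meters.
    return {"free_space_m": 8.0}

def end_to_end_net(frame):
    # Stub for the end-to-end network: a proposed steering/speed command.
    return {"steer_deg": 0.5, "speed_mps": 10.0}

def drive(frame, safety_gap_m=10.0):
    """Accept the end-to-end proposal only if detection and free-space
    results agree it is safe; otherwise command a stop."""
    det = detection_net(frame)
    seg = segmentation_net(frame)
    cmd = end_to_end_net(frame)
    hazard = any(o["distance_m"] < safety_gap_m for o in det["objects"])
    if hazard or seg["free_space_m"] < safety_gap_m:
        cmd = {"steer_deg": cmd["steer_deg"], "speed_mps": 0.0}  # brake
    return cmd

print(drive(frame=None))  # free space (8 m) < gap (10 m), so speed is 0.0
```

The design point the sketch makes is that no single network drives the car: the end-to-end planner's output is gated by the perception networks, which is one way redundancy can be built into the system.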
Tim leads the Self-Driving Technical Marketing team for NVIDIA. He works with customers, partners, and suppliers to enable autonomous vehicle technology using the DRIVE PX platform and artificial intelligence for high-quality, robust, and flexible self-driving solutions. Before joining NVIDIA, Tim was the president of the MHL Consortium, where he drove the adoption and licensing of MHL technology into more than 900 million smartphones, displays, and accessories. Tim received his B.S. in Computer Engineering from Boston University in 1986, his M.S. in Computer Science from the University of Southern California in 1988, and his Mini-MBA in Finance from the Wharton School at the University of Pennsylvania and the AT&T School of Business in 1995.
NVIDIA is known for inventing the GPU, which enables modern computer graphics -- simulating human imagination and conjuring up the worlds of video games and films. Today, NVIDIA GPUs also simulate human intelligence, running deep learning algorithms and acting as the brain of computers, robots, and self-driving cars.
Autonomous vehicles require Artificial Intelligence to navigate the nearly infinite range of possible scenarios. Thanks to AI, cars can learn how to drive. NVIDIA AI platforms offer a cloud-to-car solution, with NVIDIA systems training deep neural networks in the datacenter, and NVIDIA DRIVE™ PX running in the car to process sensor data and drive safely.
NVIDIA is working with the world’s leading companies, including Toyota, Mercedes-Benz, Audi, Volvo, and Tesla. By engaging with over 225 automakers, tier 1 suppliers, mapping companies, startups and research institutions, NVIDIA AI technology is revolutionizing the way people drive, and empowering vehicles to drive themselves.
Speaker Name: Russ Shields
Speaker Title: Chair
Speaker Organization: Ygomi LLC.
Russ Shields is Chair of Ygomi LLC. Businesses that Mr. Shields has founded and/or led include Shields Enterprises International, Cellular Business Systems, Inc. (later Convergys), Navigation Technologies (later Navteq, now HERE), and the current Ygomi companies – SEI, Connexis, and ArrayComm.
Mr. Shields is a Board Member of the ITS World Congress and Co-Chair of the ITU Collaboration on ITS Communications Standards. He is also a member of the National Space-Based Positioning, Navigation and Timing Advisory Board, a Presidential advisory committee.
Mr. Shields is an SAE Fellow and recipient of the SAE Delco Electronics ITS Award. He was inducted into the inaugural class of ITS America’s Hall of Fame and named the first U.S. member of the ITS World Congress Hall of Fame.
In 2008, Mr. Shields received the University of Chicago Booth School of Business Distinguished Alumni Award in Entrepreneurship. In 2013, the Hotchkiss School awarded Mr. Shields its Alumni Award.