Sony Outstanding
Engineer Award 2019

The Sony Outstanding Engineer Award, established to further inspire engineers to take on new challenges,
is the highest form of individual recognition for Sony Group engineers.
Developing products and services that appeal to customers' sensibilities requires Sony to work on a wide range of technologies.
In addition to advancing elemental technologies, there is a need to integrate creative new technologies and to optimize complex systems.
This section introduces the winners of the Sony Outstanding Engineer Award 2019,
who have actively addressed these challenges and achieved significant value creation.

Contribution to Space Communication Infrastructure with Small Optical Link for International Space Station (SOLISS)

Kyohei Iwamoto and his team jointly developed the long-distance laser communication system "Small Optical Link for International Space Station" (SOLISS) with the Japan Aerospace Exploration Agency (JAXA), based on the fine-pointing technique from optical disc technology. This laser communication system aims to establish real-time, high-volume data communication for future inter-satellite links and communication with ground stations. He designed the SOLISS flight system over about three years. SOLISS is now installed in the Exposed Facility of the Japanese Experiment Module "Kibo" on the International Space Station, where it is successfully demonstrating its functions in orbit.

Development of CMOS Image Sensor
with PSD Structure

The back-illuminated CMOS image sensor has seen increasing use in recent years, not only in imaging but also in sensing applications. As the leader driving the process development of this image sensor, Itaru Oshiyama has been engaged throughout the project, from element development to mass production development, and has developed the world's first technology of its kind, the Pyramid Surface Diffraction (PSD) structure. This technology forms pyramid-shaped irregularities on the surface of the photodiode, improving the infrared light sensitivity of the CMOS image sensor by 50% compared with the conventional structure, and it has reached mass production. The technology was announced at IEDM, one of the most authoritative international semiconductor conferences, greatly contributing to demonstrating the high competitiveness of Sony products.

Cross-sectional view of IMX332

Development of New Device for TWS Headphones and Launch of WF-1000XM3

Manabu Kanda developed a new device for True Wireless Stereo (TWS) headphones and integrated it into Sony's wireless noise-canceling stereo headset WF-1000XM3. The device adopts a new Bluetooth transmission technology that transmits audio to the left and right earbuds simultaneously, instead of relaying data from one side to the other as in the conventional approach, achieving major improvements in connection stability, audio latency, and power consumption. As the first product equipped with this device, the WF-1000XM3 delivers industry-leading performance and has achieved approximately twice the planned sales. It has received many industry awards and has become one of the benchmark models in the TWS market, greatly contributing to Sony's presence.

Devising New Pixel Architecture and New Process for Automotive CMOS Image Sensor and Contribution to Its Commercialization

While the automotive CMOS image sensor market is expected to grow with advanced driver assistance systems (ADAS) and autonomous driving (AD), the existing high dynamic range (HDR) technology based on the time-division multiple-exposure method suffers from LED flicker and moving-object artifacts. To address these issues, Yorito Sakano devised a new pixel architecture and a new process for automotive CMOS image sensors, and contributed to the commercialization of this technology. His achievement enables HDR shooting and LED flicker suppression to be realized at the same time, leading to highly precise recognition of objects across a wide variety of lighting conditions, from moonlight to sunlight. This technology was announced at the International Solid-State Circuits Conference 2020 (ISSCC2020).
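Why the conventional time-division method produces LED flicker can be illustrated with a toy calculation (illustrative numbers only, not Sony's design): in multiple-exposure HDR, bright regions rely on a very short exposure, which can fall entirely between the pulses of a duty-cycled LED such as a traffic signal, making it appear dark or off.

```python
# Toy sketch of the conventional time-division multiple-exposure HDR
# limitation described above. Numbers are illustrative assumptions.
import numpy as np

def led_signal(t_start, t_end, period=0.010, duty=0.1):
    """Average observed brightness of a pulsed LED over an exposure window."""
    ts = np.linspace(t_start, t_end, 1000)
    on = (ts % period) < duty * period  # LED is on for 10% of each 10 ms cycle
    return on.mean()

long_exp = led_signal(0.000, 0.008)    # 8 ms exposure: catches an LED pulse
short_exp = led_signal(0.008, 0.0081)  # 0.1 ms exposure: misses every pulse
print(f"long: {long_exp:.3f}, short: {short_exp:.3f}")
```

The short exposure records zero brightness even though the LED is lit, which is exactly the flicker artifact that a single-exposure HDR pixel architecture avoids.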

*HDR (High Dynamic Range), LFM (LED Flicker Mitigation)

Development and Service Launch of New Object-Based Music Experience 360 Reality Audio

People have been listening to conventional stereo music for more than half a century. To transform this traditional listening experience into an immersive spatial sound experience, Tokihiko Sawashi developed 360 Reality Audio (360RA) using Sony's object-based spatial audio technology. He established the 360RA ecosystem by providing key technologies for content creation, music distribution, a smartphone app, and headphone playback, and also collaborated with various artists and creators on a new content production method that allows them to freely arrange sound sources in a 360-degree sound space. This achievement has opened a new avenue for musical expression. He realized the 360RA experience on both headphones and speakers, contributing to the expansion of environments for enjoying music.

Development of Precise Robotic Forceps Equipped with an Optical Fiber-Based Force Sensor

Hiroyuki Suzuki developed precise robotic forceps equipped with an optical fiber-based force sensor. By dramatically reducing force-sensing noise during the robot's motion, he achieved the world's first* optical fiber-based precise bilateral control system with extremely high transparency. In the project, he has been engaged in 1) defining the robot specification, including medical requirements, 2) developing the precise robotic forceps, and 3) developing a small, highly sensitive force sensor using an optical strain sensor. His achievement has realized precise manipulators that can perform delicate operations while directly sensing the applied force, while at the same time contributing to Sony's presence through academic papers as well as news media articles and technical magazine coverage of this technology.

*Based on internal testing

Development of Predictive Analytics Software "Prediction One" and Deployment inside and outside Sony Group

Predictive analytics is an AI technology that uses machine learning to predict future outcomes from data. The technology can be applied in a wide range of areas, but because it requires a high level of expertise, it is difficult to introduce without AI experts. To address this challenge, Shingo Takamatsu and his team created the GUI-based predictive analytics tool "Prediction One," which enables even non-experts to run predictive analytics easily. For this tool, he developed technology that automatically performs high-level predictive analytics quickly and in a lightweight manner, as well as technology that visualizes the reasons behind predictions. Prediction One is now used by many Sony Group businesses and has been offered externally as a new business since June 2019.
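Prediction One's internals are not public, but the kind of workflow such a tool automates can be sketched with a generic machine learning library: fit a model on tabular records, predict outcomes for new data, and surface which features drove the predictions. This is a minimal illustration using scikit-learn on synthetic data, not Sony's implementation.

```python
# Generic predictive-analytics sketch (illustrative only; not Prediction One).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for business records.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Predict outcomes and report which features most influenced the model,
# a simple stand-in for "visualizing reasons for predictions."
probs = model.predict_proba(X_test)[:, 1]
ranked = sorted(enumerate(model.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
print("most influential features:", [i for i, _ in ranked[:3]])
```

A GUI tool wraps steps like these (plus data cleaning and model selection) so that no code has to be written at all.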

Development of XD Linear Motor and Realization of High-Speed AF for E-Mount Lenses

Tetsu Tanaka developed the XD (extreme dynamic) Linear Motor, which achieves three times the thrust efficiency of conventional motors, by thoroughly revising the linear motor design and component layout. The XD Linear Motor ensures frequency response characteristics that can withstand high-speed driving and realizes more stable focus driving, making it possible to offer five times the moving-object tracking performance and 1.5 times faster autofocus than competitors. With its small size and low power consumption, the XD Linear Motor can be easily optimized for each model and is now employed in E-mount lenses across the lineup, contributing to faster autofocus and smaller interchangeable lenses.

Leadership in Open Source as a Representative of Sony Group

Through 25 years of activity in the Linux kernel, and 8 years of leadership in the Linux Foundation, Tim demonstrated Sony’s commitment to being a good member of the Open Source community, and contributed to Sony’s reputation within the Open Source community. In 2019, Tim completed his term of service in the Linux Foundation Technical Advisory Board, and in early 2020, he was elected to the Linux Foundation Board of Directors.

Solution for Creating Photorealistic Volumetric Scenes for Use in Virtual Production

Tobias and his team invented and developed a method for selectively manipulating and merging multiple volumetric data captures, and a method for real-time 3D volumetric visualization of 2D local color adjustments. These technologies not only enable the creation of photorealistic volumetric scenes for use in virtual production, TV, movies, and entertainment, but also help lower production costs by reducing the time required to build physical sets and travel to locations.

Sony Innovation Studios virtual production demo at CES 2020 powered by Atom View

Development of Face/Human/Object Detector and Contribution to aibo and Xperia

Sayaka Nakamura has developed a multiclass object detector that can run on edge devices thanks to its extremely small footprint. Its proprietary neural network architecture achieves object recognition with both high throughput and high accuracy. By using a wide variety of situation-based data and an original learning algorithm, it can detect people from their legs alone, or detect faces even when they are covered by a mask or sunglasses. On aibo, it detects objects to trigger aibo's various actions, such as recognizing owners, playing with toys, and going to the battery charger. On Xperia, it is used to detect the eyes of humans and animals in the first Eye AF system integrated into a smartphone.

Development of High-Quality Volumetric Capturing Workflow for Entertainment Business

Volumetric capturing is a technology that captures a three-dimensional space in the real world, including a place and a human performance, and integrates the information from multiple cameras or sensors into 3D digital data so that it can be reproduced as video viewable from any angle or viewpoint. To enhance this technology, Yoichi Hirota developed a novel rendering workflow that can produce higher-quality free-viewpoint video in a short period of time. This achievement has already been utilized in live performances by famous musicians and in a TV advertisement for a major Japanese company, demonstrating great potential as a new video product approach in the entertainment business. It is highly expected to expand further, not only in entertainment but also in other domains such as sports analysis, education, medical care, and construction.

Image-Based Personalization of Head Related Transfer Function for 360 Reality Audio

The head-related transfer function (HRTF) is one of the most important keys to creating a spatial sound field with headphones. Kazumi Fukuda has developed an algorithm and a system that can optimize the HRTF for individual users from nothing more than a photograph of the user's ear taken with a smartphone. This image-based HRTF personalization system has made it possible to deliver truly immersive surround sound to many users. He led the development in all phases, including algorithm design, proof of concept, system establishment, data collection, and performance improvement, and has greatly contributed to 360 Reality Audio, which gives listeners a new music experience of feeling fully immersed in sound.
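While the personalization system itself is proprietary, the role an HRTF plays in headphone playback is standard: a sound source is convolved with the left- and right-ear head-related impulse responses (HRIRs) for its direction to produce a binaural signal. This sketch uses placeholder HRIRs, not measured or personalized ones.

```python
# Minimal binaural-rendering sketch: apply an HRTF (as left/right HRIRs)
# to a mono source by convolution. HRIRs here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)                 # mono source signal

# Placeholder impulse responses; a personalized system would derive
# these per user and per source direction.
hrir_left = rng.standard_normal(128) * np.exp(-np.arange(128) / 20.0)
hrir_right = np.roll(hrir_left, 8)               # crude interaural delay

left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right])               # 2-channel headphone output
print(binaural.shape)
```

Personalizing the HRIRs to the listener's ear shape is what makes the perceived source directions accurate, which is the point of the image-based system described above.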

Contribution to Entertainment Business
with Lightweight Motion Capture Technology

Motion capture is an effective technique for reproducing a CG character's motion in a realistic, human-like manner. It captures and digitizes human motion so a computer can handle it as data, but because it requires a studio setup, it is often difficult to employ for various use cases. To realize an easy-to-use system, Yasutaka Fukumoto and his team developed a lightweight motion capture technology that uses only six sensors attached to the body to estimate three-dimensional pose. With this minimal number of sensors, it can be used easily anywhere, indoors or outdoors. It has been demonstrated at several events, providing visitors with an entertainment experience in a virtual world and receiving positive feedback. This technology promises to bring better usability to both creators and users of 3D content, such as games, films, and VR/AR.

Deliver an experience where anyone can instantly be a "hero" character
© 2017 REKI KAWAHARA/KADOKAWA CORPORATION AMW/SAO-A Project
© BANDAI NAMCO Entertainment Inc.

Contribution to NIR Sensitivity Enhancement of
BI-CIS with Light Trapping Pixel Technology

Sozo Yokogawa developed a light trapping pixel technology that places diffractive structures on the crystalline silicon (c-Si) surface. This technology addresses a conventional issue of back-illuminated CMOS image sensors: the thin c-Si photo-absorption layer keeps near-infrared (NIR) sensitivity low. Instead of physically increasing the substrate thickness, he devised an approach that creates a diffractive structure on the c-Si surface, so that incident light is diffracted at the surface and totally reflected at the pixel boundary due to the large refractive index difference. As a result, the effective optical path length is elongated, doubling NIR sensitivity in the 850 nm to 940 nm wavelength range compared with conventional technologies. This innovative approach is widely used in Sony products and other companies' products, and received the Walter Kosonocky Award in 2019.
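The connection between path length and sensitivity follows from the Beer-Lambert law: the absorbed fraction of light is 1 − exp(−αL), which grows with the optical path L when the absorption coefficient α is small, as it is for silicon in the NIR. The numbers below are assumed textbook-order values for illustration, not Sony's measured data.

```python
# Back-of-envelope Beer-Lambert illustration of why elongating the
# optical path raises NIR absorption. All values are assumptions.
import math

alpha = 0.02       # assumed Si absorption coefficient near 940 nm, 1/um
thickness = 3.0    # assumed photodiode silicon thickness, um

def absorbed_fraction(path_um):
    """Fraction of incident light absorbed over a given optical path."""
    return 1.0 - math.exp(-alpha * path_um)

direct = absorbed_fraction(thickness)         # straight single pass
trapped = absorbed_fraction(3.0 * thickness)  # diffraction + total internal
                                              # reflection elongate the path
print(f"direct: {direct:.3f}, trapped: {trapped:.3f}, "
      f"gain: {trapped / direct:.2f}x")
```

Because α is small at these wavelengths, absorption scales almost linearly with path length, so even a modest elongation yields a large relative sensitivity gain.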