"Spinal reflex" enables rapid feedback and response
Real-time Image Data Analysis
In today's automated, "smart" world, fields such as transportation, security, agriculture, healthcare, and manufacturing all need tasks done in real time, with latency of just a few milliseconds or less. In an industrial system where image sensors collect data, image recognition platforms recognize and analyze that data, and robotic systems handle control and movement commands, image sensors and AI perform advanced image recognition and analysis, while high-speed vision sensors handle rapid feedback and response. It's much like the relationship between the human brain (AI) and spinal cord (high-speed vision sensors).
Captured video data is processed inside the sensor
Abe: Simply put, high-speed vision sensors detect and track objects. Normally, an image sensor transfers the data captured on the device side to an external GPU/CPU for processing, but a high-speed vision sensor performs this processing internally. Because this minimizes the data transferred to the outside, the entire system can be faster and physically smaller. Incidentally, while a typical image sensor can capture and process 30 to 60 frames per second, our high-speed vision sensor can capture and process 1,000 frames per second, roughly 16 to 33 times that rate. Changes in the color and brightness of target objects are checked 1,000 times per second, which means the system can react to remarkably fast movements.
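The detect-and-track step Abe describes can be illustrated with a toy frame-differencing tracker. This is a minimal sketch of the general technique, not Sony's actual on-chip pipeline; the function name and threshold are assumptions for illustration. The idea it demonstrates is why a high frame rate helps: at 1,000 fps an object moves only a tiny distance between consecutive frames, so even a very cheap per-frame comparison can stay locked onto it.

```python
import numpy as np

def track_motion(prev_frame, frame, threshold=30):
    """Return the centroid (row, col) of pixels that changed between
    two consecutive grayscale frames, or None if nothing moved.
    A toy stand-in for the in-sensor detect-and-track step."""
    # Signed difference so subtraction of uint8 frames cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = np.argwhere(diff > threshold)  # coordinates of changed pixels
    if moving.size == 0:
        return None
    return moving.mean(axis=0)  # centroid of the changed region

# Example: a bright 4x4 square shifts by one pixel between frames.
prev = np.zeros((64, 64), dtype=np.uint8)
prev[10:14, 10:14] = 255
cur = np.zeros((64, 64), dtype=np.uint8)
cur[11:15, 11:15] = 255
centroid = track_motion(prev, cur)  # centroid of the motion, near (12, 12)
```

Because each frame's computation is this light, it can plausibly run at sensor rate without shipping full frames to an external processor, which is the point Abe makes about reduced data transfer.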
Nose: Think about trying to photograph a fast-moving object, only for it to get away, or about a robot that cannot function properly because its response time is too slow. These are problems caused by limited speed on the sensor side. The desire to improve this situation was the original motivation for developing this technology.
*High-speed vision sensors apply technology jointly developed with the University of Tokyo's Ishikawa Laboratory
High-speed vision sensors enable amazing experiences
Abe: Many people experienced our high-speed vision sensors in the A(i)R Hockey table game showcased at the 2018 South by Southwest (SXSW) conference. And during the interactive exhibition "High Speed Colors" at the Sony Square Shibuya Project in 2020, projection mapping was performed on the bodies of Mini 4WD* cars moving at high speed, a hugely challenging feat. The outstanding rapid-response performance of our high-speed vision sensors is expected to find similar uses in the entertainment world. Additionally, in the eye-catching "Sports & AI Project" exhibition at the 2020 Consumer Electronics Show (CES), two high-speed vision sensors observed a ping pong ball flying between players, and the intense back and forth was reproduced in real-time 3DCG. This system, which visualizes the movements and techniques of the players as well as the trajectory and rotation of the ball in 3D, could help make sports broadcasts more fun and help athletes hone their skills. Moreover, the Eye-sensing Light Field Display, also exhibited at CES 2020, uses high-speed vision sensors to recognize the observer's line of sight from captured images, letting the viewer look around a 3D image from any viewpoint without blurring.
*Mini 4WD is a registered trademark of Tamiya Incorporated
AI and high-speed vision sensors embody "getting closer to people"
Abe: Metaphorically speaking, the AI is the brain, and the high-speed vision sensors are the spinal cord. By combining these two technologies, we can achieve human-like functionality. For example, AI can judge, monitor, and analyze a situation, while high-speed vision sensors detect moving objects and track pedestrians; the resulting "spinal reflexes" can then avoid collisions, bringing us closer to a non-collision vehicle. This is just one way that the combination of AI and high-speed vision sensors has major potential in various areas, starting with the automation of factories as well as drone and robot operation.
Nose: The role of the high-speed vision sensors is to detect objects before sending any data to the AI. The AI, meanwhile, is good at analyzing the data from the sensors and identifying differences, for example whether an object is a person or a dog. By combining the two, performance is optimized. In the future, the number of objects that can be identified while maintaining the rapid rate of the high-speed vision sensors will continue to increase. Sony has already commercialized an intelligent vision sensor with AI processing functionality. It is an image sensor created for a purpose different from that of our high-speed vision sensor, but either way, by combining the strengths of different technologies, we believe it will be possible to create even more advanced sensors going forward.
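The brain/spinal-cord division of labor that Abe and Nose describe can be sketched as a two-rate pipeline: a cheap reflex check runs on every frame at sensor rate, while a heavy recognition model runs only on a fraction of frames. This is a hypothetical illustration of the architecture, not a real product design; all function names, thresholds, and rates here are assumptions.

```python
# Hypothetical sketch: fast "spinal" loop at sensor rate,
# slow "brain" loop at conventional AI rate.

SLOW_EVERY_N = 33  # run the heavy model ~30x/s while the sensor runs at 1000x/s

def reflex(obstacle_distance_m):
    """Fast path: per-frame, latency-critical decision."""
    if obstacle_distance_m is not None and obstacle_distance_m < 2.0:
        return "brake"
    return "cruise"

def classify(frame):
    """Slow path: stand-in for an expensive AI recognition model."""
    return "pedestrian" if frame.get("warm_moving") else "static object"

def run_pipeline(frames):
    """Each frame is a dict summarizing what the sensor saw."""
    actions, labels = [], []
    for i, f in enumerate(frames):
        actions.append(reflex(f.get("distance_m")))  # every frame
        if i % SLOW_EVERY_N == 0:
            labels.append(classify(f))               # every Nth frame
    return actions, labels
```

The design point is that the reflex never waits on the classifier: even if recognition lags by many frames, the collision-avoidance decision is made at the sensor's full rate.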
Real-time recognition means more convenience and more fun
Abe: We believe that increasing the interactions between devices and people will drive innovation and create a society of greater convenience and prosperity. Highly responsive interaction becomes possible when a device can capture data and then process it in real time, all by itself. The development of large AI platforms is obviously important, but it is equally important to develop something on the device side that can analyze data instantly and with a light footprint. Our high-speed vision sensor is able to recognize movements in real time. We would also like to use this technology to develop contactless interfaces, and looking further ahead, we believe we can help create a future where physical switches do not even exist.
Nose: The high-speed aspect of the sensor is important for the realization of augmented reality (AR) and virtual reality (VR). Current VR headsets offer a realistic experience, but we believe we can provide even higher quality entertainment by reducing the lag between the movement of the user's head and the image displayed. Further improvements in sensing technology will be required to achieve this. As a member of Sony that values fun and entertainment, I will continue striving to create sensing technology that works in real time.