This technology recognizes what the input video shows and outputs the corresponding category label.
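As a minimal sketch of how a classifier assigns a label, the toy example below uses a nearest-centroid rule over synthetic feature vectors (real systems extract features with a deep network; the action names and 2-D features here are invented for illustration):

```python
import numpy as np

# Toy nearest-centroid classifier: each class is summarized by the mean
# of its training feature vectors; a new clip's feature vector receives
# the label of the closest centroid.
def fit_centroids(features, labels):
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(centroids, feature):
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

train_feats = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
               np.array([0.0, 1.0]), np.array([0.1, 0.9])]
train_labels = ["running", "running", "swimming", "swimming"]
centroids = fit_centroids(train_feats, train_labels)
print(classify(centroids, np.array([0.95, 0.05])))  # → running
```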
This technology improves image quality by outputting an image with a higher resolution than the input image.
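The simplest way to see the input/output relationship is non-learned upsampling: the sketch below doubles an image's resolution by repeating each pixel (real super-resolution models instead learn to synthesize plausible high-frequency detail):

```python
import numpy as np

def upscale_nearest(img, factor=2):
    # Repeat each pixel `factor` times along both axes
    # (nearest-neighbor upsampling).
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.array([[0, 1],
                [2, 3]])
big = upscale_nearest(img)
print(big.shape)  # → (4, 4)
```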
This technology assigns every pixel in the input image to the category it belongs to, which can be visualized by painting each region with a different color.
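A sketch of the per-pixel output format, using fixed intensity thresholds in place of a learned model (the class ids and thresholds are invented for illustration):

```python
import numpy as np

# Per-pixel classification by intensity: every pixel gets a class id
# (0, 1, or 2 here). Real segmentation networks learn these decisions;
# this only illustrates that the output is a label per pixel.
def segment(gray, thresholds=(85, 170)):
    labels = np.zeros(gray.shape, dtype=int)
    labels[gray >= thresholds[0]] = 1
    labels[gray >= thresholds[1]] = 2
    return labels

gray = np.array([[10, 100],
                 [200, 50]])
print(segment(gray))  # → [[0 1]
                      #    [2 0]]
```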
This technique estimates the positions of human joints and other key points (neck, shoulders, elbows, wrists, ankles, etc.), and the connections between them, from images.
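Many pose estimators output one "heatmap" per joint and read the joint's position off the heatmap's peak; the sketch below does exactly that on synthetic heatmaps (the joint names are illustrative):

```python
import numpy as np

# Extract a joint's (x, y) position as the location of the maximum
# value in its heatmap.
def peak(heatmap):
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(x), int(y)

wrist = np.zeros((4, 4)); wrist[1, 2] = 1.0   # peak at x=2, y=1
elbow = np.zeros((4, 4)); elbow[3, 0] = 1.0   # peak at x=0, y=3
print(peak(wrist), peak(elbow))  # → (2, 1) (0, 3)
```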
This technique describes how each image pixel moves between consecutive frames. By capturing this flow across successive frames, the relative motion of objects in the video footage can be recovered.
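A minimal sketch of motion estimation between two frames, using brute-force block matching (find the displacement minimizing the sum of squared differences); real methods estimate a dense per-pixel flow field, whereas this recovers a single global shift:

```python
import numpy as np

# Try every displacement up to max_shift and keep the one that best
# aligns frame 2 with frame 1 (minimum sum of squared differences).
def estimate_shift(f1, f2, max_shift=3):
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(f2, (-dy, -dx), axis=(0, 1)) - f1) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
frame1 = rng.random((16, 16))
frame2 = np.roll(frame1, (2, 1), axis=(0, 1))  # content moved down 2, right 1
print(estimate_shift(frame1, frame2))  # → (2, 1)
```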
This technology outputs a rectangle (bounding box) enclosing each object in the input image, and can be used to detect where people, faces, cars, signs, etc. are located in the image.
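To illustrate the bounding-box output, the sketch below runs a sliding-window search with a fixed template (real detectors learn what to look for; this "detector" only finds an exact patch, and the box format (x, y, width, height) is one common convention):

```python
import numpy as np

# Slide the template over every position in the image and return the
# bounding box (x, y, width, height) of the best match (minimum sum of
# squared differences).
def detect(image, template):
    th, tw = template.shape
    best, best_err = None, np.inf
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            err = np.sum((image[y:y+th, x:x+tw] - template) ** 2)
            if err < best_err:
                best, best_err = (x, y, tw, th), err
    return best

image = np.zeros((8, 8))
image[3:5, 5:7] = 1.0           # "object" at x=5, y=3
template = np.ones((2, 2))
print(detect(image, template))  # → (5, 3, 2, 2)
```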
This technique detects “anomalous” data that behaves differently from the majority of data.
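The simplest instance of this idea is a z-score rule on one-dimensional data: flag points that deviate from the mean by more than k standard deviations (real systems model "normal" behavior with richer statistics or learned models; the sensor readings below are synthetic):

```python
import numpy as np

# Flag indices whose values deviate from the mean by more than
# k standard deviations.
def anomalies(data, k=2.0):
    data = np.asarray(data, dtype=float)
    z = np.abs(data - data.mean()) / data.std()
    return np.where(z > k)[0]

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 25.0, 10.0, 10.1]
print(anomalies(readings))  # flags the outlier at index 5
```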
This technology outputs the location of a given object across consecutive image frames without losing sight of it.
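A minimal tracking sketch, using nearest-neighbor data association: in each new frame, the tracked object is linked to the detection closest to its last known position (real trackers add motion models and appearance cues; the coordinates below are synthetic):

```python
import numpy as np

# In every frame, pick the detection nearest to the object's last
# known position and append it to the track.
def track(initial_pos, detections_per_frame):
    path = [initial_pos]
    for detections in detections_per_frame:
        last = np.array(path[-1])
        nearest = min(detections,
                      key=lambda d: np.linalg.norm(np.array(d) - last))
        path.append(nearest)
    return path

frames = [
    [(12, 11), (50, 50)],   # frame 1 detections
    [(14, 13), (52, 49)],   # frame 2
    [(15, 15), (51, 51)],   # frame 3
]
print(track((10, 10), frames))  # → [(10, 10), (12, 11), (14, 13), (15, 15)]
```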