Computer Vision Weekly – Late January


Hi everybody,

This is the first in our series of weekly news digests dedicated to computer vision technology. Each week we will bring you only the newest news and the “tastiest” articles, as well as upcoming events. If you have any suggestions, don’t hesitate to write to us!

So… let’s go!


1. Google Will Bring Project Tango’s Computer Vision to Robots and Drones


Google has recently featured its Project Tango augmented reality platform on a new Lenovo smartphone. The technology gives Android devices the ability to navigate the physical world with a new kind of spatial perception by adding advanced computer vision, image processing, and specialized vision sensors. The device’s price has not yet been announced, but the release is planned for summer.

Lenovo is inviting developers to submit their ideas for gaming and utility apps built with Project Tango by February 15, 2016.

Google has also announced that it wants to extend its computer vision platform beyond phones and tablets to robots, drones, and other devices, where it sees huge potential for impact.

Project Tango is a hardware and software platform that lets devices acquire a wealth of information about their location and the objects in view: measuring distances, recognizing items, building 3D models of objects, mapping locations, and more. Relevant information is shown on screen in real time. [Google wants to bring Project Tango’s computer vision to robots, drones]

2. CES 2016 News: Intel Drone avoids any Obstacles…even falling Trees

Chip-maker Intel has revealed the Typhoon H, a collision-avoiding drone that can automatically dodge obstacles in its path. Yuneec, a company partially owned by Intel, demonstrated the drone and its 3D camera sensor. The camera uses Intel’s RealSense technology, which employs infrared lasers to measure the distance of nearby objects.

“A central part of our mission is to bring new and advanced creative possibilities within the reach of everyone,” said Yu Tian, chief executive officer of Yuneec International. “We’ve engineered the Typhoon H to redefine what customers should expect to pay for a drone with such an array of professional features. At this price point, no other drone comes close to the Typhoon H in terms of capability and value.”

Among the drone’s other advantages: the Typhoon H shoots 4K video and 12-megapixel photos, putting it in direct competition with DJI’s Inspire 1 drone, which offers the same camera specifications.

The drone is expected to go on sale within six months at a price of around $1,799 (£1,200). [CES 2016: Intel drone dodges ‘falling tree’ on stage]
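Intel has not published how RealSense-based avoidance works internally, but the basic idea can be sketched in a few lines: given a depth map from the sensor, check whether anything in the flight path is closer than a safety threshold and, if so, steer toward the clearer side. Everything below (function names, the threshold, the depth values) is illustrative, not Intel’s API.

```python
# Toy sketch of depth-based obstacle avoidance (illustrative only;
# not Intel's RealSense API). The "depth map" is a 2D grid of
# distances in metres, as an infrared depth sensor might report.

SAFETY_DISTANCE_M = 2.0  # hypothetical minimum clearance

def dodge_direction(depth_map):
    """Return 'left', 'right', or None if the path ahead is clear.

    depth_map: list of rows, each a list of distances in metres.
    Columns in the left half of the frame correspond to obstacles
    on the left, and vice versa, so we dodge toward the clearer side.
    """
    width = len(depth_map[0])
    mid = width // 2
    left_min = min(min(row[:mid]) for row in depth_map)
    right_min = min(min(row[mid:]) for row in depth_map)
    if min(left_min, right_min) >= SAFETY_DISTANCE_M:
        return None  # nothing inside the safety bubble
    # Obstacle detected: steer toward whichever side has more room.
    return "right" if left_min < right_min else "left"

# A falling tree entering from the left of the frame:
frame = [
    [1.2, 1.5, 8.0, 9.0],
    [1.1, 1.4, 7.5, 9.0],
]
print(dodge_direction(frame))  # -> right
```

A real system would of course also control speed and altitude and cope with sensor noise; the sketch only shows the distance-thresholding idea behind the demo.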

3. Sighthound Releases Sentry D Computer Vision Engine for Drones

Sighthound, Inc. has announced the release of Sighthound Sentry D, a version of its Sentry computer vision engine specifically designed for flying cameras. The system allows a drone to follow people or objects autonomously based on what it sees.

Sighthound’s software, which uses computer vision to interpret the video stream from a camera feed, can be deployed on embedded devices, on computers, or in the cloud. The Sentry D variant has been adapted specifically to detect, classify, and track people and objects from moving cameras.
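Sighthound has not disclosed Sentry D’s internals, but to give a feel for what “tracking from a moving camera” involves, here is a minimal nearest-centroid tracker that keeps object IDs stable from frame to frame. All names and thresholds are hypothetical, and real trackers add motion models and appearance features on top of this.

```python
import math

# Minimal centroid-tracking sketch (hypothetical; not Sighthound's
# algorithm). Each frame yields detections as (x, y) centroids; we
# match them to known tracks by nearest distance so IDs stay stable.

MAX_MATCH_DIST = 50.0  # pixels; illustrative matching threshold

class CentroidTracker:
    def __init__(self):
        self.tracks = {}   # track_id -> last known (x, y)
        self.next_id = 0

    def update(self, detections):
        """Assign a stable ID to each detected centroid."""
        assigned = {}
        unmatched = dict(self.tracks)
        for (x, y) in detections:
            # Find the closest existing track, if any is near enough.
            best_id, best_d = None, MAX_MATCH_DIST
            for tid, (tx, ty) in unmatched.items():
                d = math.hypot(x - tx, y - ty)
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:          # new object enters the frame
                best_id = self.next_id
                self.next_id += 1
            else:
                del unmatched[best_id]   # each track matches at most once
            self.tracks[best_id] = (x, y)
            assigned[best_id] = (x, y)
        return assigned

tracker = CentroidTracker()
print(tracker.update([(10, 10), (200, 50)]))  # two new objects: IDs 0 and 1
print(tracker.update([(14, 12), (205, 48)]))  # same objects keep their IDs
```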

“Sentry D, combined with the advanced imaging and video capabilities of Ambarella’s latest Ultra HD A9SE system-on-chip, will enable autonomous follow-me capability in the next generation of flying cameras,” said Stephen Neish, Sighthound’s CEO.

Sighthound also provides Sighthound Cloud, a comprehensive set of computer vision APIs for developers, and Sighthound Video, advanced computer vision security software for homes and businesses. [Drones That See]


Pick of the Week:

The Future of Real-Time SLAM and “Deep Learning vs SLAM”


A great post provides a brief introduction to SLAM, a summary of the International Conference on Computer Vision (ICCV), and some takeaway messages from the “Deep Learning vs SLAM” discussion. The basic idea is that today’s SLAM systems are large-scale “correspondence engines” that can be used to generate large-scale datasets: precisely what a deep ConvNet needs to be fed. If the author’s overview of the seven sessions is not enough for you, check out the ICCV 2015 YouTube channel. [The Future of Real-Time SLAM and “Deep Learning vs SLAM”]
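The “correspondence engine” idea can be made concrete with a toy example: a SLAM front end matches binary feature descriptors (ORB-style) between consecutive frames by Hamming distance, and it is exactly these matched pairs, produced at scale, that could be harvested as ConvNet training data. This is a generic sketch of that step, not code from the linked post.

```python
# Toy correspondence matching: ORB-style binary descriptors are
# compared by Hamming distance, and mutual nearest neighbours become
# correspondences. A generic sketch of a SLAM front-end step, not
# code from the linked post.

def hamming(a, b):
    """Number of differing bits between two integer descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(desc_a, desc_b):
    """Return (i, j) index pairs that are mutual nearest neighbours."""
    matches = []
    for i, da in enumerate(desc_a):
        # Nearest neighbour of descriptor i in the second frame...
        j = min(range(len(desc_b)), key=lambda k: hamming(da, desc_b[k]))
        # ...kept only if the match is mutual (a standard SLAM-style check).
        i_back = min(range(len(desc_a)), key=lambda k: hamming(desc_b[j], desc_a[k]))
        if i_back == i:
            matches.append((i, j))
    return matches

# 8-bit toy descriptors from two consecutive frames:
frame1 = [0b10110010, 0b01001101]
frame2 = [0b01001100, 0b10110011]  # the same features, with a bit of noise
print(match_descriptors(frame1, frame2))  # -> [(0, 1), (1, 0)]
```

Real systems use 256-bit descriptors and thousands of keypoints per frame, which is what makes the resulting correspondence datasets “large-scale”.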


Upcoming Events

ICPRAM: The International Conference on Pattern Recognition Applications and Methods [Register Here]

CVIP: The International Conference on Computer Vision and Image Processing [Register Here]

International Conference on Pattern Recognition Systems [Submit Paper Here]

International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision [Submit Paper Here]
