By Steve Calechman
Automation has been around since ancient Greece. Its form changes, but the intent of having technology take over repetitive tasks has remained consistent, and a fundamental element of success has been the ability to image. The latest iteration is robots, and the problem with most of them in industrial automation is that they work in fixture-based environments designed specifically for them. That's fine if nothing changes, but things inevitably do. What robots need, and currently lack, is the ability to adapt quickly, see objects precisely, and place them in the correct orientation to enable operations like autonomous assembly and packaging.
Akasha Imaging is trying to change that. The California startup with MIT roots uses passive imaging across varied modalities and spectra, combined with deep learning, to provide higher-resolution feature detection, tracking, and pose orientation more efficiently and cost-effectively. Robotics is the main application and current focus; packaging and navigation systems could follow. Those are secondary, says Akasha CEO Kartik Venkataraman, but because they would require minimal adaptation, they speak to the overall potential of what the company is developing. “That’s the exciting part of what this technology is capable of,” he says.