When it comes to video enhancement software, it’s not practical to reinvent the wheel for every single use case. There are so many different combinations of hardware and software that could potentially be used in drones, smartphones, wearable cameras, smart glasses and other cameras in motion. This is why it pays to look for video enhancement software that is general enough to be used in a wide variety of configurations.
Writing software is hard. Writing a high-performance video enhancement SDK is even harder. Taking the time to make sure your code doesn’t solve just one problem on one particular set of hardware takes even more effort. Looking towards the future doesn’t just mean keeping an eye out for what to do next; it also means planning ahead in the software you’re writing today, instead of settling for easy gains on current, specific problems.
For example, drones are one of the hottest products in technology today. Their future seems bright both as consumer products and in expanding commercial applications. Vision processing capabilities that will enable the future of drones include collision avoidance, broader autonomous navigation, terrain analysis and subject tracking. Collision avoidance is not only relevant for fully autonomous navigation but also for “copilot” assistance when the drone is primarily controlled by a human, analogous to today’s driver-assistance systems in cars.
These key features are poised to expand the drone market of tomorrow by making drones more capable and easier to use. The algorithms that will be used, whether they exist today or will be researched in the years to come, are typically applicable to a wide range of problems. But a general problem domain isn’t enough. The implementation – the software itself – must also be general in order to unlock the future potential use cases for drones and similar cameras in motion.
The word “hardcoding” is frequently used in software development to denote code written for exactly one configuration. Consider the millions of pixels in every video frame: a simple, single-threaded CPU implementation has to modify them one by one, which can take significant time. The easy way out is to hardcode this option into the software, since it will always work, and there is always a CPU. But it’s also a very slow option.
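As a rough sketch of what such a hardcoded path could look like, here is a single-threaded brightness adjustment over a frame’s pixels (the function and frame layout are illustrative, not taken from any real SDK):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hardcoded, always-works path: touch every pixel on one CPU core.
// The frame is modeled as a flat array of 8-bit luma values.
void brighten_naive(std::vector<uint8_t>& pixels, uint8_t delta) {
    for (auto& p : pixels) {
        p = static_cast<uint8_t>(std::min(255, p + delta));  // clamp at white
    }
}
```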
Most CPUs have multiple cores. You can divide the big chunk of data into smaller subsets, one per core, and process them simultaneously, considerably reducing the time required. Some devices have GPUs, and some have even more specialized hardware, like FPGAs or DSPs. All of these can be leveraged to improve performance.
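A multi-core version of the same hypothetical operation could split the frame into one chunk per core, in the spirit described above (again a sketch, assuming the same illustrative frame layout):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Divide the frame into contiguous chunks, one per hardware core,
// and brighten all chunks simultaneously.
void brighten_parallel(std::vector<uint8_t>& pixels, uint8_t delta) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const size_t chunk = (pixels.size() + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const size_t begin = c * chunk;
        const size_t end = std::min(pixels.size(), begin + chunk);
        if (begin >= end) break;  // fewer chunks than cores
        workers.emplace_back([&pixels, delta, begin, end] {
            for (size_t i = begin; i < end; ++i) {
                pixels[i] = static_cast<uint8_t>(std::min(255, pixels[i] + delta));
            }
        });
    }
    for (auto& w : workers) w.join();
}
```

On a four-core CPU this cuts the wall-clock time of the loop roughly in proportion to the number of cores, which is exactly the kind of gain a hardcoded single-threaded path leaves on the table.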
For both smartphones and drones, the cost, performance and power consumption of the different subsystems are weighed when designing a product, and size and weight are especially important for drones. Different technologies deliver different tradeoffs, and video enhancement software needs to be able to adjust to this easily, preferably even automatically.
The best way to process the data also depends on other factors, such as whether other software is running at the same time and competing for system resources. In short, setting up a render pipeline in a smart way takes more time up front than a hardcoded solution, but in the long run it gives you a more efficient and scalable platform.
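What “a smart way” could mean in practice is choosing the processing backend at runtime instead of baking one in. The sketch below assumes a hypothetical capability probe; a real implementation would query the platform itself (for example, by enumerating OpenCL or Vulkan devices):

```cpp
// Illustrative backend selection, not a real API.
enum class Backend { GPU, DSP, MultiCoreCPU, SingleCPU };

// Hypothetical results of probing the device at startup.
struct Capabilities {
    bool has_gpu = false;
    bool has_dsp = false;
    unsigned cpu_cores = 1;
};

// Prefer the most specialized hardware present, falling back to the
// plain CPU path that always works.
Backend select_backend(const Capabilities& caps) {
    if (caps.has_gpu) return Backend::GPU;
    if (caps.has_dsp) return Backend::DSP;
    if (caps.cpu_cores > 1) return Backend::MultiCoreCPU;
    return Backend::SingleCPU;
}
```

The same idea extends to re-selecting the backend when conditions change, for instance when the GPU is saturated by another application.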
A typical implementation project often requires more than just installing a packaged product. There’s normally additional work needed for integration and fine-tuning of algorithms. Clients may request anything from customized products to pre-testing or characterization evaluations, whose results are distilled down to a few key variables.
A “semi-automatic” process centered on maintaining generality takes some effort in the short run but certainly pays off when scaling up in the long run. Ultimately, it’s easier to adapt the software to different devices, hardware configurations and client needs, while making it easy to add features and serve more clients. And this is just barely scratching the surface of what is possible.
In short, if you purchase hardware or software only for video stabilization and later need to integrate something new to enable object tracking or other features, you could face an expensive and time-consuming problem. A general, customizable video enhancement platform that can be integrated into different devices and configurations is a better answer. It also allows you to add and upgrade performance and features over time as needed, which is key to future-proofing your video quality.
Johan Svensson – CTO
Johan holds an MSc in Engineering Physics from Umeå University and has experience from GE Healthcare, where he held a number of senior roles in project management and product development, as well as senior engineering roles in optics and sensor technology. Outside office hours, Johan is a skilled and enthusiastic photographer.
Contact Information:
Johan Svensson
johan.svensson@vidhance.com
www.weareimint.com